Nov 25 14:24:30 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 25 14:24:30 crc restorecon[4690]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 14:24:30 crc restorecon[4690]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 25 14:24:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc 
restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:24:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 14:24:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 14:24:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 14:24:30 crc 
restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 
14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 
14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:24:31 crc 
restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc 
restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc 
restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:24:31 crc restorecon[4690]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 
25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 
crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc 
restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc 
restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc 
restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:24:31 crc 
restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:24:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 14:24:31 crc restorecon[4690]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 14:24:31 crc restorecon[4690]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 25 14:24:32 crc kubenswrapper[4796]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 14:24:32 crc kubenswrapper[4796]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 25 14:24:32 crc kubenswrapper[4796]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 14:24:32 crc kubenswrapper[4796]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 25 14:24:32 crc kubenswrapper[4796]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 25 14:24:32 crc kubenswrapper[4796]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.133607 4796 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139470 4796 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139502 4796 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139512 4796 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139521 4796 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139532 4796 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139545 4796 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139569 4796 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139602 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139611 4796 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139618 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139626 4796 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139634 4796 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139642 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139650 4796 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139658 4796 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139665 4796 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139673 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139681 4796 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139689 4796 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139697 4796 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139704 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139716 4796 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139726 4796 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139734 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139741 4796 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139749 4796 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139757 4796 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139764 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139771 4796 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139779 4796 feature_gate.go:330] unrecognized feature gate: Example
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139787 4796 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139794 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139802 4796 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139812 4796 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139823 4796 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139832 4796 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139840 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139848 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139857 4796 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139865 4796 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139872 4796 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139880 4796 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139888 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139896 4796 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139903 4796 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139911 4796 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139918 4796 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139926 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139934 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139941 4796 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139949 4796 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139957 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139965 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139973 4796 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139981 4796 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139990 4796 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.139999 4796 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.140006 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.140014 4796 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.140021 4796 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.140031 4796 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.140043 4796 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.140051 4796 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.140060 4796 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.140068 4796 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.140076 4796 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.140083 4796 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.140091 4796 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.140098 4796 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.140106 4796 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.140115 4796 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141473 4796 flags.go:64] FLAG: --address="0.0.0.0"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141496 4796 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141511 4796 flags.go:64] FLAG: --anonymous-auth="true"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141522 4796 flags.go:64] FLAG: --application-metrics-count-limit="100"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141533 4796 flags.go:64] FLAG: --authentication-token-webhook="false"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141542 4796 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141553 4796 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141564 4796 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141596 4796 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141605 4796 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141614 4796 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141624 4796 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141633 4796 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141642 4796 flags.go:64] FLAG: --cgroup-root=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141651 4796 flags.go:64] FLAG: --cgroups-per-qos="true"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141660 4796 flags.go:64] FLAG: --client-ca-file=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141668 4796 flags.go:64] FLAG: --cloud-config=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141677 4796 flags.go:64] FLAG: --cloud-provider=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141686 4796 flags.go:64] FLAG: --cluster-dns="[]"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141696 4796 flags.go:64] FLAG: --cluster-domain=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141705 4796 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141714 4796 flags.go:64] FLAG: --config-dir=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141722 4796 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141732 4796 flags.go:64] FLAG: --container-log-max-files="5"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141743 4796 flags.go:64] FLAG: --container-log-max-size="10Mi"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141752 4796 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141761 4796 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141770 4796 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141780 4796 flags.go:64] FLAG: --contention-profiling="false"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141789 4796 flags.go:64] FLAG: --cpu-cfs-quota="true"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141798 4796 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141808 4796 flags.go:64] FLAG: --cpu-manager-policy="none"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141816 4796 flags.go:64] FLAG: --cpu-manager-policy-options=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141827 4796 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141836 4796 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141845 4796 flags.go:64] FLAG: --enable-debugging-handlers="true"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141864 4796 flags.go:64] FLAG: --enable-load-reader="false"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141873 4796 flags.go:64] FLAG: --enable-server="true"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141882 4796 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141894 4796 flags.go:64] FLAG: --event-burst="100"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141903 4796 flags.go:64] FLAG: --event-qps="50"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141911 4796 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141920 4796 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141929 4796 flags.go:64] FLAG: --eviction-hard=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141940 4796 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141949 4796 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141957 4796 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141967 4796 flags.go:64] FLAG: --eviction-soft=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141977 4796 flags.go:64] FLAG: --eviction-soft-grace-period=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.141987 4796 flags.go:64] FLAG: --exit-on-lock-contention="false"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142000 4796 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142012 4796 flags.go:64] FLAG: --experimental-mounter-path=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142023 4796 flags.go:64] FLAG: --fail-cgroupv1="false"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142036 4796 flags.go:64] FLAG: --fail-swap-on="true"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142047 4796 flags.go:64] FLAG: --feature-gates=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142061 4796 flags.go:64] FLAG: --file-check-frequency="20s"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142073 4796 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142084 4796 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142093 4796 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142102 4796 flags.go:64] FLAG: --healthz-port="10248"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142111 4796 flags.go:64] FLAG: --help="false"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142120 4796 flags.go:64] FLAG: --hostname-override=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142129 4796 flags.go:64] FLAG: --housekeeping-interval="10s"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142141 4796 flags.go:64] FLAG: --http-check-frequency="20s"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142150 4796 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142159 4796 flags.go:64] FLAG: --image-credential-provider-config=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142167 4796 flags.go:64] FLAG: --image-gc-high-threshold="85"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142176 4796 flags.go:64] FLAG: --image-gc-low-threshold="80"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142184 4796 flags.go:64] FLAG: --image-service-endpoint=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142193 4796 flags.go:64] FLAG: --kernel-memcg-notification="false"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142202 4796 flags.go:64] FLAG: --kube-api-burst="100"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142211 4796 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142222 4796 flags.go:64] FLAG: --kube-api-qps="50"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142231 4796 flags.go:64] FLAG: --kube-reserved=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142240 4796 flags.go:64] FLAG: --kube-reserved-cgroup=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142249 4796 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142258 4796 flags.go:64] FLAG: --kubelet-cgroups=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142267 4796 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142276 4796 flags.go:64] FLAG: --lock-file=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142284 4796 flags.go:64] FLAG: --log-cadvisor-usage="false"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142293 4796 flags.go:64] FLAG: --log-flush-frequency="5s"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142303 4796 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142315 4796 flags.go:64] FLAG: --log-json-split-stream="false"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142325 4796 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142336 4796 flags.go:64] FLAG: --log-text-split-stream="false"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142347 4796 flags.go:64] FLAG: --logging-format="text"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142358 4796 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142371 4796 flags.go:64] FLAG: --make-iptables-util-chains="true"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142381 4796 flags.go:64] FLAG: --manifest-url=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142394 4796 flags.go:64] FLAG: --manifest-url-header=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142408 4796 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142421 4796 flags.go:64] FLAG: --max-open-files="1000000"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142435 4796 flags.go:64] FLAG: --max-pods="110"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142445 4796 flags.go:64] FLAG: --maximum-dead-containers="-1"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142454 4796 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142464 4796 flags.go:64] FLAG: --memory-manager-policy="None"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142473 4796 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142483 4796 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142491 4796 flags.go:64] FLAG: --node-ip="192.168.126.11"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142500 4796 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142519 4796 flags.go:64] FLAG: --node-status-max-images="50"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142528 4796 flags.go:64] FLAG: --node-status-update-frequency="10s"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142537 4796 flags.go:64] FLAG: --oom-score-adj="-999"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142546 4796 flags.go:64] FLAG: --pod-cidr=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142554 4796 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142568 4796 flags.go:64] FLAG: --pod-manifest-path=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142604 4796 flags.go:64] FLAG: --pod-max-pids="-1"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142614 4796 flags.go:64] FLAG: --pods-per-core="0"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142624 4796 flags.go:64] FLAG: --port="10250"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142633 4796 flags.go:64] FLAG: --protect-kernel-defaults="false"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142642 4796 flags.go:64] FLAG: --provider-id=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142651 4796 flags.go:64] FLAG: --qos-reserved=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142660 4796 flags.go:64] FLAG: --read-only-port="10255"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142669 4796 flags.go:64] FLAG: --register-node="true"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142677 4796 flags.go:64] FLAG: --register-schedulable="true"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142686 4796 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142701 4796 flags.go:64] FLAG: --registry-burst="10"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142710 4796 flags.go:64] FLAG: --registry-qps="5"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142719 4796 flags.go:64] FLAG: --reserved-cpus=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142727 4796 flags.go:64] FLAG: --reserved-memory=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142738 4796 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142747 4796 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142757 4796 flags.go:64] FLAG: --rotate-certificates="false"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142765 4796 flags.go:64] FLAG: --rotate-server-certificates="false"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142775 4796 flags.go:64] FLAG: --runonce="false"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142783 4796 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142793 4796 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142802 4796 flags.go:64] FLAG: --seccomp-default="false"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142811 4796 flags.go:64] FLAG: --serialize-image-pulls="true"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142820 4796 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142830 4796 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142869 4796 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142880 4796 flags.go:64] FLAG: --storage-driver-password="root"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142889 4796 flags.go:64] FLAG: --storage-driver-secure="false"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142898 4796 flags.go:64] FLAG: --storage-driver-table="stats"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142906 4796 flags.go:64] FLAG: --storage-driver-user="root"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142915 4796 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142925 4796 flags.go:64] FLAG: --sync-frequency="1m0s"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142934 4796 flags.go:64] FLAG: --system-cgroups=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142942 4796 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142956 4796 flags.go:64] FLAG: --system-reserved-cgroup=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142966 4796 flags.go:64] FLAG: --tls-cert-file=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142974 4796 flags.go:64] FLAG: --tls-cipher-suites="[]"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.142986 4796 flags.go:64] FLAG: --tls-min-version=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.143007 4796 flags.go:64] FLAG: --tls-private-key-file=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.143018 4796 flags.go:64] FLAG: --topology-manager-policy="none"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.143028 4796 flags.go:64] FLAG: --topology-manager-policy-options=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.143038 4796 flags.go:64] FLAG: --topology-manager-scope="container"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.143048 4796 flags.go:64] FLAG: --v="2"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.143061 4796 flags.go:64] FLAG: --version="false"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.143072 4796 flags.go:64] FLAG: --vmodule=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.143082 4796 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.143092 4796 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143301 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143312 4796 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143320 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143331 4796 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143340 4796 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143350 4796 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143361 4796 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143370 4796 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143379 4796 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143386 4796 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143394 4796 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143402 4796 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143410 4796 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143417 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143427 4796 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143437 4796 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143446 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143454 4796 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143462 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143470 4796 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143478 4796 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143486 4796 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143494 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143501 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143509 4796 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143516 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143524 4796 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143544 4796 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143552 4796 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143559 4796 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143567 4796 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143602 4796 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143611 4796 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143619 4796 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143626 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143636 4796 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143646 4796 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143656 4796 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143664 4796 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143675 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143683 4796 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143691 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143700 4796 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143709 4796 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143717 4796 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143726 4796 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143734 4796 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143742 4796 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143750 4796 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143757 4796 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143765 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143774 4796 feature_gate.go:330] unrecognized feature gate: Example
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143781 4796 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143789 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143797 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143804 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143812 4796 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143819 4796 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143827 4796 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143837 4796 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143845 4796 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143852 4796 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143859 4796 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143869 4796 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143876 4796 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143885 4796 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143892 4796 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143904 4796 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143912 4796 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143920 4796 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.143929 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.144934 4796 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.161831 4796 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.162215 4796 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.162817 4796 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.162881 4796 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.162894 4796 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.162907 4796 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.162920 4796 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.162930 4796 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.162939 4796 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.162947 4796 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.162955 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.162963 4796 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.162971 4796 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.162979 4796 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.162987 4796 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.162995 4796 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163002 4796 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163010 4796 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163017 4796 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163025 4796 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163033 4796
feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163041 4796 feature_gate.go:330] unrecognized feature gate: Example Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163048 4796 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163056 4796 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163064 4796 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163071 4796 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163079 4796 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163087 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163095 4796 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163105 4796 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163114 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163124 4796 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163132 4796 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163141 4796 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163148 4796 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163156 4796 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163164 4796 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163177 4796 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163186 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163195 4796 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163204 4796 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163212 4796 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163220 4796 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163228 4796 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163238 4796 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163247 4796 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163255 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163263 4796 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163271 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163279 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163286 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163294 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163303 
4796 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163310 4796 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163318 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163325 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163333 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163341 4796 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163350 4796 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163358 4796 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163366 4796 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163374 4796 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163381 4796 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163389 4796 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163397 4796 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163405 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163413 
4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163421 4796 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163429 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163436 4796 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163444 4796 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163452 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163459 4796 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.163472 4796 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163738 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163753 4796 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163761 4796 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163771 4796 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 
25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163780 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163788 4796 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163797 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163805 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163813 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163820 4796 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163828 4796 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163836 4796 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163844 4796 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163851 4796 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163859 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163867 4796 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163875 4796 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163883 4796 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163891 4796 feature_gate.go:330] unrecognized feature gate: 
OpenShiftPodSecurityAdmission Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163899 4796 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163906 4796 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163914 4796 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163922 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163929 4796 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163937 4796 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163945 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163952 4796 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163959 4796 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163967 4796 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163975 4796 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163983 4796 feature_gate.go:330] unrecognized feature gate: Example Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163991 4796 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.163998 4796 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164007 4796 
feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164017 4796 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164028 4796 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164037 4796 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164047 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164055 4796 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164063 4796 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164072 4796 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164080 4796 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164087 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164095 4796 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164103 4796 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164111 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164118 4796 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 25 14:24:32 crc kubenswrapper[4796]: 
W1125 14:24:32.164129 4796 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164137 4796 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164146 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164154 4796 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164162 4796 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164169 4796 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164179 4796 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164189 4796 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164200 4796 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164211 4796 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164220 4796 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164230 4796 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164239 4796 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164247 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164255 4796 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164263 4796 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164272 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164280 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164289 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164297 4796 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164306 4796 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164314 4796 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164321 4796 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.164329 4796 feature_gate.go:330] unrecognized 
feature gate: AzureWorkloadIdentity Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.164341 4796 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.165413 4796 server.go:940] "Client rotation is on, will bootstrap in background" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.171302 4796 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.171435 4796 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.174187 4796 server.go:997] "Starting client certificate rotation" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.174223 4796 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.174452 4796 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-12 07:47:23.329597506 +0000 UTC Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.174564 4796 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 1145h22m51.155038989s for next certificate rotation Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.199482 4796 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.202325 4796 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.221539 4796 log.go:25] "Validated CRI v1 runtime API" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.263073 4796 log.go:25] "Validated CRI v1 image API" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.265703 4796 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.272414 4796 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-25-14-19-52-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.272468 4796 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 
blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.301840 4796 manager.go:217] Machine: {Timestamp:2025-11-25 14:24:32.299511291 +0000 UTC m=+0.642620755 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:666c950e-4620-4706-912b-93ef77d5c70a BootID:416e3262-df66-4d84-86f4-b2212e7ea3f7 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:5d:10:9b Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:5d:10:9b Speed:-1 
Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d5:8c:70 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:22:fe:96 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:40:bf:a4 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:2e:ec:9d Speed:-1 Mtu:1496} {Name:eth10 MacAddress:06:60:9a:af:77:04 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:aa:4f:b0:64:34:86 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 
Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.302235 4796 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.302446 4796 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.307671 4796 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.308140 4796 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.308214 4796 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.308602 4796 topology_manager.go:138] "Creating topology manager with none policy"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.308625 4796 container_manager_linux.go:303] "Creating device plugin manager"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.309197 4796 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.309251 4796 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.309728 4796 state_mem.go:36] "Initialized new in-memory state store"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.309899 4796 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.312768 4796 kubelet.go:418] "Attempting to sync node with API server"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.312805 4796 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.312843 4796 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.312867 4796 kubelet.go:324] "Adding apiserver pod source"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.312887 4796 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.316853 4796 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.318051 4796 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.320729 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused
Nov 25 14:24:32 crc kubenswrapper[4796]: E1125 14:24:32.320847 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError"
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.320835 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused
Nov 25 14:24:32 crc kubenswrapper[4796]: E1125 14:24:32.320917 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.321807 4796 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.323874 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.324052 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.324192 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.324328 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.324450 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.324604 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.324740 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.324866 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.324997 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.325120 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.325246 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.325357 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.326668 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.328507 4796 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.328787 4796 server.go:1280] "Started kubelet"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.329067 4796 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.329179 4796 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.330017 4796 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 25 14:24:32 crc systemd[1]: Started Kubernetes Kubelet.
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.334041 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.334313 4796 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.335020 4796 server.go:460] "Adding debug handlers to kubelet server"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.334323 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 04:13:11.001636712 +0000 UTC
Nov 25 14:24:32 crc kubenswrapper[4796]: E1125 14:24:32.335466 4796 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.335490 4796 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.335478 4796 volume_manager.go:287] "The desired_state_of_world populator starts"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.335533 4796 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.337684 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused
Nov 25 14:24:32 crc kubenswrapper[4796]: E1125 14:24:32.337797 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError"
Nov 25 14:24:32 crc kubenswrapper[4796]: E1125 14:24:32.337654 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="200ms"
Nov 25 14:24:32 crc kubenswrapper[4796]: E1125 14:24:32.337689 4796 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.227:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187b4606654971ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 14:24:32.328741357 +0000 UTC m=+0.671850821,LastTimestamp:2025-11-25 14:24:32.328741357 +0000 UTC m=+0.671850821,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.344880 4796 factory.go:153] Registering CRI-O factory
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.345057 4796 factory.go:221] Registration of the crio container factory successfully
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.345193 4796 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.345211 4796 factory.go:55] Registering systemd factory
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.345287 4796 factory.go:221] Registration of the systemd container factory successfully
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.345323 4796 factory.go:103] Registering Raw factory
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.345347 4796 manager.go:1196] Started watching for new ooms in manager
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.351109 4796 manager.go:319] Starting recovery of all containers
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.357352 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.357440 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.357471 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.357497 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.357524 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.357551 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.357615 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.357645 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.357676 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.357706 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.357778 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.357809 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.358550 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.358630 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.358670 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.358702 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.358733 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.358827 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.358860 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.358887 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.358952 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.358997 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359027 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359055 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359082 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359113 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359186 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359223 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359252 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359280 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359361 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359389 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359416 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359446 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359477 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359504 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359532 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359644 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359670 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359698 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359725 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359752 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359780 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359807 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359834 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359862 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359906 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359947 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.359975 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360004 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360032 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360062 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360100 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360130 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360158 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360186 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360215 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360241 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360270 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360300 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360324 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360352 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360386 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360413 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360438 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360464 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360541 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360601 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360632 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360659 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360686 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360712 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360741 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360771 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360797 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360823 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360847 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360875 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360905 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360931 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360959 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.360987 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.361015 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.361045 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.361072 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.361114 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.361139 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.361166 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.361189 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.361217 4796 reconstruct.go:130] "Volume is marked as
uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.361244 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.361269 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.361294 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.361367 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.361401 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.361425 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.361486 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.361512 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.364202 4796 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.364355 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.364540 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.364656 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.364720 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.364745 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.364768 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.364803 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.364966 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.365072 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.365326 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.366045 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.366293 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.366433 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.366610 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.366778 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.366906 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.367067 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.367200 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.367318 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.367433 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.367552 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" 
seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.367703 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.367834 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.367959 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.368074 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.368195 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.368412 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.368623 4796 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.368783 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.368935 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.369104 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.369260 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.369431 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.369656 4796 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.369823 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.370031 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.370210 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.370401 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.370609 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.370800 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.371039 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.371951 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372008 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372031 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372052 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372072 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372091 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372112 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372130 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372148 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372166 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372186 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372204 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372223 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372241 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372260 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372280 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372299 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372317 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372338 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372357 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372376 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372394 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372413 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372432 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372451 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372471 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372489 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372508 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372527 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" 
seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372548 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372570 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372617 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372650 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372669 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372689 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372710 4796 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372729 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372748 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372766 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372786 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372806 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372823 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372845 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372864 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372882 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372901 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372920 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372949 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372969 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.372989 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.373009 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.373030 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.373047 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.373066 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.373085 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.373103 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.373123 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.373143 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.373162 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.373188 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.373206 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.373224 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.373241 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.373259 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.373279 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.373299 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.373318 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.373337 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.373355 4796 reconstruct.go:97] "Volume reconstruction finished"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.373369 4796 reconciler.go:26] "Reconciler: start to sync state"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.385842 4796 manager.go:324] Recovery completed
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.402899 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.403762 4796 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.404670 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.404718 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.404736 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.407958 4796 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.408024 4796 status_manager.go:217] "Starting to sync pod status with apiserver"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.408073 4796 kubelet.go:2335] "Starting kubelet main sync loop"
Nov 25 14:24:32 crc kubenswrapper[4796]: E1125 14:24:32.408148 4796 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.408746 4796 cpu_manager.go:225] "Starting CPU manager" policy="none"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.408776 4796 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.408805 4796 state_mem.go:36] "Initialized new in-memory state store"
Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.411542 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused
Nov 25 14:24:32 crc kubenswrapper[4796]: E1125 14:24:32.411713 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.425849 4796 policy_none.go:49] "None policy: Start"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.426653 4796 memory_manager.go:170] "Starting memorymanager" policy="None"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.426696 4796 state_mem.go:35] "Initializing new in-memory state store"
Nov 25 14:24:32 crc kubenswrapper[4796]: E1125 14:24:32.435783 4796 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.495239 4796 manager.go:334] "Starting Device Plugin manager"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.496114 4796 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.496151 4796 server.go:79] "Starting device plugin registration server"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.496840 4796 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.496877 4796 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.499097 4796 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.499768 4796 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.499803 4796 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.508339 4796 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.508488 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.510105 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.510178 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.510206 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.510491 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:24:32 crc kubenswrapper[4796]: E1125 14:24:32.510873 4796 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.510938 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.511057 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.512055 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.512107 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.512127 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.512314 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.512466 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.512552 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.512721 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.512787 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.512826 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.513464 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.513510 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.513536 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.513785 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.514002 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.514068 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.514328 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.514421 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.514441 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.515105 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.515166 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.515196 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.515230 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.515259 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.515276 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.515429 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.515648 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.515720 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.516773 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.516820 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.516838 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.516994 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.517026 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.517044 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.517271 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.517321 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.518421 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.518471 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.518501 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:24:32 crc kubenswrapper[4796]: E1125 14:24:32.539211 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="400ms"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.576052 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.576298 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.576467 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.576650 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.576799 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.576960 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.577102 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.577242 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.577382 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.577540 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.577756 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.577904 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.578044 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.578181 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.578317 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.597130 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.598431 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.598480 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.598501 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.598533 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: E1125 14:24:32.599216 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.227:6443: connect: connection refused" node="crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680086 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680154 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680195 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680225 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680259 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680297 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680308 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680336 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680358 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680385 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680404 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680450 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680387 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680472 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680451 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680338 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680527 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680545 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680586 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680426 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680622 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680677 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680719 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680761 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680795 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680862 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680869 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680804 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680872 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.680820 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.800316 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.802015 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.802085 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.802110 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.802152 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 14:24:32 crc kubenswrapper[4796]: E1125 14:24:32.802796 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.227:6443: connect: connection refused" node="crc" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.867698 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.893896 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.911517 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.917249 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-7a43f6ff4d2c928b21388625ad199e128baf3c53ab8800d010be05cf76dca0bb WatchSource:0}: Error finding container 7a43f6ff4d2c928b21388625ad199e128baf3c53ab8800d010be05cf76dca0bb: Status 404 returned error can't find the container with id 7a43f6ff4d2c928b21388625ad199e128baf3c53ab8800d010be05cf76dca0bb Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.922281 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.934352 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-e432276ab7a81834c4475de6847873f6ba6bdcd154f600aa7868d925f0bed51b WatchSource:0}: Error finding container e432276ab7a81834c4475de6847873f6ba6bdcd154f600aa7868d925f0bed51b: Status 404 returned error can't find the container with id e432276ab7a81834c4475de6847873f6ba6bdcd154f600aa7868d925f0bed51b Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.938161 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-b7a1ffa7cf162f23e1400d62f229eb4f1d4075f76b37bf34d197bde6c223d0a6 WatchSource:0}: Error finding container b7a1ffa7cf162f23e1400d62f229eb4f1d4075f76b37bf34d197bde6c223d0a6: Status 404 returned error can't find the container with id b7a1ffa7cf162f23e1400d62f229eb4f1d4075f76b37bf34d197bde6c223d0a6 Nov 25 14:24:32 crc kubenswrapper[4796]: E1125 14:24:32.940195 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="800ms" Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.945694 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-971907aafa9109de23d1ccb7635d56b0268b2f3c78788a2d2199c058e36ab1d3 WatchSource:0}: Error finding container 971907aafa9109de23d1ccb7635d56b0268b2f3c78788a2d2199c058e36ab1d3: Status 404 returned error can't find the container with id 
971907aafa9109de23d1ccb7635d56b0268b2f3c78788a2d2199c058e36ab1d3 Nov 25 14:24:32 crc kubenswrapper[4796]: I1125 14:24:32.948492 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 14:24:32 crc kubenswrapper[4796]: W1125 14:24:32.980912 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-48f7a1bb3b7244e6b6273ad1847b49a503fe7277807b8afb837592784f7efbee WatchSource:0}: Error finding container 48f7a1bb3b7244e6b6273ad1847b49a503fe7277807b8afb837592784f7efbee: Status 404 returned error can't find the container with id 48f7a1bb3b7244e6b6273ad1847b49a503fe7277807b8afb837592784f7efbee Nov 25 14:24:33 crc kubenswrapper[4796]: I1125 14:24:33.203374 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:33 crc kubenswrapper[4796]: I1125 14:24:33.205161 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:33 crc kubenswrapper[4796]: I1125 14:24:33.205222 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:33 crc kubenswrapper[4796]: I1125 14:24:33.205245 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:33 crc kubenswrapper[4796]: I1125 14:24:33.205286 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 14:24:33 crc kubenswrapper[4796]: E1125 14:24:33.206698 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.227:6443: connect: connection refused" node="crc" Nov 25 14:24:33 crc kubenswrapper[4796]: W1125 14:24:33.273675 4796 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Nov 25 14:24:33 crc kubenswrapper[4796]: E1125 14:24:33.273792 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:24:33 crc kubenswrapper[4796]: W1125 14:24:33.318999 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Nov 25 14:24:33 crc kubenswrapper[4796]: E1125 14:24:33.319109 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:24:33 crc kubenswrapper[4796]: I1125 14:24:33.330458 4796 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Nov 25 14:24:33 crc kubenswrapper[4796]: I1125 14:24:33.336640 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:33:19.843804225 +0000 UTC Nov 25 14:24:33 crc kubenswrapper[4796]: I1125 
14:24:33.336714 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 100h8m46.507095442s for next certificate rotation Nov 25 14:24:33 crc kubenswrapper[4796]: I1125 14:24:33.412763 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7a43f6ff4d2c928b21388625ad199e128baf3c53ab8800d010be05cf76dca0bb"} Nov 25 14:24:33 crc kubenswrapper[4796]: I1125 14:24:33.414814 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"48f7a1bb3b7244e6b6273ad1847b49a503fe7277807b8afb837592784f7efbee"} Nov 25 14:24:33 crc kubenswrapper[4796]: I1125 14:24:33.416758 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"971907aafa9109de23d1ccb7635d56b0268b2f3c78788a2d2199c058e36ab1d3"} Nov 25 14:24:33 crc kubenswrapper[4796]: I1125 14:24:33.418672 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b7a1ffa7cf162f23e1400d62f229eb4f1d4075f76b37bf34d197bde6c223d0a6"} Nov 25 14:24:33 crc kubenswrapper[4796]: I1125 14:24:33.420455 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e432276ab7a81834c4475de6847873f6ba6bdcd154f600aa7868d925f0bed51b"} Nov 25 14:24:33 crc kubenswrapper[4796]: W1125 14:24:33.681181 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Nov 25 14:24:33 crc kubenswrapper[4796]: E1125 14:24:33.681297 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:24:33 crc kubenswrapper[4796]: E1125 14:24:33.741903 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="1.6s" Nov 25 14:24:33 crc kubenswrapper[4796]: W1125 14:24:33.918508 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Nov 25 14:24:33 crc kubenswrapper[4796]: E1125 14:24:33.918657 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.007555 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.009626 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 
14:24:34.009689 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.009707 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.009762 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 14:24:34 crc kubenswrapper[4796]: E1125 14:24:34.010399 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.227:6443: connect: connection refused" node="crc" Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.329553 4796 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.425284 4796 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="a479f5ec52a12bd47f277ecfceac00b5e57038ec2c49c77988496aa510c3b73a" exitCode=0 Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.425419 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.425440 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"a479f5ec52a12bd47f277ecfceac00b5e57038ec2c49c77988496aa510c3b73a"} Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.426668 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.426735 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.426760 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.428377 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c"} Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.428440 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a"} Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.428466 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4"} Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.432030 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808" exitCode=0 Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.432123 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808"} Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.432165 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.433535 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.433613 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.433632 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.435461 4796 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7a2f85887490e219b08eaf784274bb0ea37f1255dd94aaf7a9de02323b4dbd57" exitCode=0 Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.435540 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7a2f85887490e219b08eaf784274bb0ea37f1255dd94aaf7a9de02323b4dbd57"} Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.435673 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.435691 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.437074 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.437109 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.437127 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.437201 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.437243 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.437264 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.438104 4796 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="812463b79a0c55901cf740dd35df1619a443aa0dc66509d7553cd660311f58c8" exitCode=0 Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.438150 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"812463b79a0c55901cf740dd35df1619a443aa0dc66509d7553cd660311f58c8"} Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.438219 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.439521 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.439558 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:34 crc kubenswrapper[4796]: I1125 14:24:34.439617 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:35 crc kubenswrapper[4796]: W1125 14:24:35.276855 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
38.102.83.227:6443: connect: connection refused Nov 25 14:24:35 crc kubenswrapper[4796]: E1125 14:24:35.276975 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:24:35 crc kubenswrapper[4796]: W1125 14:24:35.298893 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Nov 25 14:24:35 crc kubenswrapper[4796]: E1125 14:24:35.298975 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:24:35 crc kubenswrapper[4796]: I1125 14:24:35.329707 4796 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Nov 25 14:24:35 crc kubenswrapper[4796]: W1125 14:24:35.341896 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Nov 25 14:24:35 crc kubenswrapper[4796]: E1125 14:24:35.341988 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: 
failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:24:35 crc kubenswrapper[4796]: E1125 14:24:35.343278 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="3.2s" Nov 25 14:24:35 crc kubenswrapper[4796]: I1125 14:24:35.443613 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d868f7a390763d85a4b449d223d6ee9fbc6f99da73b9887cf01c3e364412809b"} Nov 25 14:24:35 crc kubenswrapper[4796]: I1125 14:24:35.443670 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0e898ed92981e7f3340ae53aa34928f0aa00dfb9be5464c60955ff9107bdbae8"} Nov 25 14:24:35 crc kubenswrapper[4796]: I1125 14:24:35.446871 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca"} Nov 25 14:24:35 crc kubenswrapper[4796]: I1125 14:24:35.446928 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:35 crc kubenswrapper[4796]: I1125 14:24:35.448436 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:35 crc kubenswrapper[4796]: I1125 14:24:35.448466 4796 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:35 crc kubenswrapper[4796]: I1125 14:24:35.448477 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:35 crc kubenswrapper[4796]: I1125 14:24:35.449155 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f"} Nov 25 14:24:35 crc kubenswrapper[4796]: I1125 14:24:35.449194 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885"} Nov 25 14:24:35 crc kubenswrapper[4796]: I1125 14:24:35.451169 4796 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7bae3d13ead9fc2cc11bbf53dcbb3f3220263b80ff4ac95efa798fab7abec807" exitCode=0 Nov 25 14:24:35 crc kubenswrapper[4796]: I1125 14:24:35.451213 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7bae3d13ead9fc2cc11bbf53dcbb3f3220263b80ff4ac95efa798fab7abec807"} Nov 25 14:24:35 crc kubenswrapper[4796]: I1125 14:24:35.451291 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:35 crc kubenswrapper[4796]: I1125 14:24:35.452141 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:35 crc kubenswrapper[4796]: I1125 14:24:35.452170 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:35 crc kubenswrapper[4796]: I1125 14:24:35.452181 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:35 crc kubenswrapper[4796]: I1125 14:24:35.452961 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"77bb71e0e54f1775d15613a46d20108d60646c9ca9c4c8518ad3aeba4e8f2d85"} Nov 25 14:24:35 crc kubenswrapper[4796]: I1125 14:24:35.453015 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:35 crc kubenswrapper[4796]: I1125 14:24:35.453651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:35 crc kubenswrapper[4796]: I1125 14:24:35.453671 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:35 crc kubenswrapper[4796]: I1125 14:24:35.453681 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:35 crc kubenswrapper[4796]: I1125 14:24:35.610670 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:35 crc kubenswrapper[4796]: I1125 14:24:35.613920 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:35 crc kubenswrapper[4796]: I1125 14:24:35.613971 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:35 crc kubenswrapper[4796]: I1125 14:24:35.613988 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:35 crc kubenswrapper[4796]: I1125 14:24:35.614019 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 14:24:35 crc kubenswrapper[4796]: E1125 14:24:35.614546 4796 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.227:6443: connect: connection refused" node="crc" Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.329349 4796 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Nov 25 14:24:36 crc kubenswrapper[4796]: E1125 14:24:36.420723 4796 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.227:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187b4606654971ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 14:24:32.328741357 +0000 UTC m=+0.671850821,LastTimestamp:2025-11-25 14:24:32.328741357 +0000 UTC m=+0.671850821,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.457780 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9e0da41b1ad7c29895c550716e8325fa0ba9c6bf6d1fd16482df8751102b6cbc"} Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.457825 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.458590 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.458632 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.458642 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.461025 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"483d635f462943e9b3ee4f3c5499059b3fb2c4aed2daac5e68b4ef05c1699f1e"} Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.461066 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d"} Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.461076 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f"} Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.461201 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.463549 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.463588 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.463599 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.465941 4796 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="718c72fdbee0144461c5320276c085390e205b458559ad024b78f85da4c5eb02" exitCode=0 Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.466001 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"718c72fdbee0144461c5320276c085390e205b458559ad024b78f85da4c5eb02"} Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.466025 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.466100 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.466135 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.467354 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.467385 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.467395 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.467358 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.467441 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.467464 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.467722 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.467809 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.467867 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:36 crc kubenswrapper[4796]: W1125 14:24:36.918031 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.227:6443: connect: connection refused Nov 25 14:24:36 crc kubenswrapper[4796]: E1125 14:24:36.918137 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.227:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:24:36 crc kubenswrapper[4796]: I1125 14:24:36.955127 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:24:37 crc kubenswrapper[4796]: I1125 14:24:37.470680 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 14:24:37 crc kubenswrapper[4796]: I1125 14:24:37.474106 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="483d635f462943e9b3ee4f3c5499059b3fb2c4aed2daac5e68b4ef05c1699f1e" exitCode=255 Nov 25 14:24:37 crc kubenswrapper[4796]: I1125 14:24:37.474168 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"483d635f462943e9b3ee4f3c5499059b3fb2c4aed2daac5e68b4ef05c1699f1e"} Nov 25 14:24:37 crc kubenswrapper[4796]: I1125 14:24:37.474233 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:37 crc kubenswrapper[4796]: I1125 14:24:37.475411 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:37 crc kubenswrapper[4796]: I1125 14:24:37.475447 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:37 crc kubenswrapper[4796]: I1125 14:24:37.475461 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:37 crc kubenswrapper[4796]: I1125 14:24:37.475976 4796 scope.go:117] "RemoveContainer" containerID="483d635f462943e9b3ee4f3c5499059b3fb2c4aed2daac5e68b4ef05c1699f1e" Nov 25 14:24:37 crc kubenswrapper[4796]: I1125 14:24:37.479151 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0ea128b20ef56a4bcf9acd90137f837696bb45de74bdde5c1b2ba9f9ef0aff70"} Nov 25 14:24:37 crc kubenswrapper[4796]: I1125 14:24:37.479208 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1f5ab1b5003e3e8bc0b469250a54299bb9af98dd3402106bdc17b47d1ffcc277"} Nov 25 14:24:37 crc kubenswrapper[4796]: I1125 14:24:37.479219 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" 
Nov 25 14:24:37 crc kubenswrapper[4796]: I1125 14:24:37.479227 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"497387d82f92fba9170e7d28717885b344c103696d852ffc88a0d3b30e230ed1"} Nov 25 14:24:37 crc kubenswrapper[4796]: I1125 14:24:37.479257 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:37 crc kubenswrapper[4796]: I1125 14:24:37.479965 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:37 crc kubenswrapper[4796]: I1125 14:24:37.480004 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:37 crc kubenswrapper[4796]: I1125 14:24:37.480020 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:38 crc kubenswrapper[4796]: I1125 14:24:38.015463 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 14:24:38 crc kubenswrapper[4796]: I1125 14:24:38.025746 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:24:38 crc kubenswrapper[4796]: I1125 14:24:38.486802 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"983aae4e4dff7dce80ede7ac68a0d3a0f543c9b98500f370d9e3ce580a7e8248"} Nov 25 14:24:38 crc kubenswrapper[4796]: I1125 14:24:38.486864 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b5af9d7340a691bf6705cc8874472734232129d55a3a3512f58c454867d8e22c"} Nov 25 14:24:38 crc kubenswrapper[4796]: I1125 
14:24:38.486982 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:38 crc kubenswrapper[4796]: I1125 14:24:38.488116 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:38 crc kubenswrapper[4796]: I1125 14:24:38.488167 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:38 crc kubenswrapper[4796]: I1125 14:24:38.488185 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:38 crc kubenswrapper[4796]: I1125 14:24:38.489730 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 14:24:38 crc kubenswrapper[4796]: I1125 14:24:38.492206 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1"} Nov 25 14:24:38 crc kubenswrapper[4796]: I1125 14:24:38.492295 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:38 crc kubenswrapper[4796]: I1125 14:24:38.492347 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:38 crc kubenswrapper[4796]: I1125 14:24:38.493623 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:38 crc kubenswrapper[4796]: I1125 14:24:38.493692 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:38 crc kubenswrapper[4796]: I1125 14:24:38.493706 4796 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:38 crc kubenswrapper[4796]: I1125 14:24:38.493720 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:38 crc kubenswrapper[4796]: I1125 14:24:38.493760 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:38 crc kubenswrapper[4796]: I1125 14:24:38.493875 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:38 crc kubenswrapper[4796]: I1125 14:24:38.627310 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:24:38 crc kubenswrapper[4796]: I1125 14:24:38.815324 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:38 crc kubenswrapper[4796]: I1125 14:24:38.816967 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:38 crc kubenswrapper[4796]: I1125 14:24:38.817071 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:38 crc kubenswrapper[4796]: I1125 14:24:38.817090 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:38 crc kubenswrapper[4796]: I1125 14:24:38.817125 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 14:24:39 crc kubenswrapper[4796]: I1125 14:24:39.495517 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:39 crc kubenswrapper[4796]: I1125 14:24:39.495602 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:39 crc kubenswrapper[4796]: I1125 14:24:39.495758 4796 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:24:39 crc kubenswrapper[4796]: I1125 14:24:39.497208 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:39 crc kubenswrapper[4796]: I1125 14:24:39.497293 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:39 crc kubenswrapper[4796]: I1125 14:24:39.497337 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:39 crc kubenswrapper[4796]: I1125 14:24:39.497361 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:39 crc kubenswrapper[4796]: I1125 14:24:39.497298 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:39 crc kubenswrapper[4796]: I1125 14:24:39.497420 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:39 crc kubenswrapper[4796]: I1125 14:24:39.663569 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:24:39 crc kubenswrapper[4796]: I1125 14:24:39.769534 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 25 14:24:40 crc kubenswrapper[4796]: I1125 14:24:40.497705 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:40 crc kubenswrapper[4796]: I1125 14:24:40.497755 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:40 crc kubenswrapper[4796]: I1125 14:24:40.498930 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:40 crc 
kubenswrapper[4796]: I1125 14:24:40.498999 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:40 crc kubenswrapper[4796]: I1125 14:24:40.498934 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:40 crc kubenswrapper[4796]: I1125 14:24:40.499023 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:40 crc kubenswrapper[4796]: I1125 14:24:40.499060 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:40 crc kubenswrapper[4796]: I1125 14:24:40.499084 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:40 crc kubenswrapper[4796]: I1125 14:24:40.536094 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 25 14:24:41 crc kubenswrapper[4796]: I1125 14:24:41.500312 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:41 crc kubenswrapper[4796]: I1125 14:24:41.500313 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:41 crc kubenswrapper[4796]: I1125 14:24:41.501308 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:41 crc kubenswrapper[4796]: I1125 14:24:41.501378 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:41 crc kubenswrapper[4796]: I1125 14:24:41.501399 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:41 crc kubenswrapper[4796]: I1125 14:24:41.501561 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 14:24:41 crc kubenswrapper[4796]: I1125 14:24:41.501631 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:41 crc kubenswrapper[4796]: I1125 14:24:41.501651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:41 crc kubenswrapper[4796]: I1125 14:24:41.539857 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:24:41 crc kubenswrapper[4796]: I1125 14:24:41.540041 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:41 crc kubenswrapper[4796]: I1125 14:24:41.541189 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:41 crc kubenswrapper[4796]: I1125 14:24:41.541224 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:41 crc kubenswrapper[4796]: I1125 14:24:41.541240 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:41 crc kubenswrapper[4796]: I1125 14:24:41.756892 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:24:41 crc kubenswrapper[4796]: I1125 14:24:41.764629 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:24:42 crc kubenswrapper[4796]: I1125 14:24:42.503142 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:42 crc kubenswrapper[4796]: I1125 14:24:42.503274 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:24:42 crc kubenswrapper[4796]: I1125 14:24:42.504772 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:42 crc kubenswrapper[4796]: I1125 14:24:42.504832 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:42 crc kubenswrapper[4796]: I1125 14:24:42.504858 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:42 crc kubenswrapper[4796]: E1125 14:24:42.511679 4796 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 25 14:24:43 crc kubenswrapper[4796]: I1125 14:24:43.505525 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:43 crc kubenswrapper[4796]: I1125 14:24:43.507428 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:43 crc kubenswrapper[4796]: I1125 14:24:43.507470 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:43 crc kubenswrapper[4796]: I1125 14:24:43.507481 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:43 crc kubenswrapper[4796]: I1125 14:24:43.512122 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:24:44 crc kubenswrapper[4796]: I1125 14:24:44.508195 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:44 crc kubenswrapper[4796]: I1125 14:24:44.510001 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 14:24:44 crc kubenswrapper[4796]: I1125 14:24:44.510183 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:44 crc kubenswrapper[4796]: I1125 14:24:44.510317 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:44 crc kubenswrapper[4796]: I1125 14:24:44.540494 4796 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 25 14:24:44 crc kubenswrapper[4796]: I1125 14:24:44.540743 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 25 14:24:44 crc kubenswrapper[4796]: I1125 14:24:44.631087 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:24:45 crc kubenswrapper[4796]: I1125 14:24:45.510516 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:45 crc kubenswrapper[4796]: I1125 14:24:45.511968 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:45 crc kubenswrapper[4796]: I1125 14:24:45.512024 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:45 crc kubenswrapper[4796]: 
I1125 14:24:45.512042 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:47 crc kubenswrapper[4796]: I1125 14:24:47.331178 4796 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 25 14:24:47 crc kubenswrapper[4796]: I1125 14:24:47.901069 4796 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 25 14:24:47 crc kubenswrapper[4796]: I1125 14:24:47.901118 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 25 14:24:47 crc kubenswrapper[4796]: I1125 14:24:47.907793 4796 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 25 14:24:47 crc kubenswrapper[4796]: I1125 14:24:47.907960 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 25 14:24:48 crc 
kubenswrapper[4796]: I1125 14:24:48.027091 4796 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 25 14:24:48 crc kubenswrapper[4796]: I1125 14:24:48.027153 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 25 14:24:48 crc kubenswrapper[4796]: I1125 14:24:48.634919 4796 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]log ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]etcd ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/openshift.io-startkubeinformers ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/openshift.io-api-request-count-filter ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/generic-apiserver-start-informers ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/priority-and-fairness-config-consumer ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/priority-and-fairness-filter 
ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/start-apiextensions-informers ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/start-apiextensions-controllers ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/crd-informer-synced ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/start-system-namespaces-controller ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/start-cluster-authentication-info-controller ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/start-legacy-token-tracking-controller ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/start-service-ip-repair-controllers ok Nov 25 14:24:48 crc kubenswrapper[4796]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Nov 25 14:24:48 crc kubenswrapper[4796]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/priority-and-fairness-config-producer ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/bootstrap-controller ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/start-kube-aggregator-informers ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/apiservice-status-local-available-controller ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/apiservice-status-remote-available-controller ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/apiservice-registration-controller ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/apiservice-wait-for-first-sync ok Nov 25 
14:24:48 crc kubenswrapper[4796]: [+]poststarthook/apiservice-discovery-controller ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/kube-apiserver-autoregistration ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]autoregister-completion ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/apiservice-openapi-controller ok Nov 25 14:24:48 crc kubenswrapper[4796]: [+]poststarthook/apiservice-openapiv3-controller ok Nov 25 14:24:48 crc kubenswrapper[4796]: livez check failed Nov 25 14:24:48 crc kubenswrapper[4796]: I1125 14:24:48.635021 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:24:49 crc kubenswrapper[4796]: I1125 14:24:49.808689 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 25 14:24:49 crc kubenswrapper[4796]: I1125 14:24:49.808910 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:49 crc kubenswrapper[4796]: I1125 14:24:49.810277 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:49 crc kubenswrapper[4796]: I1125 14:24:49.810355 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:49 crc kubenswrapper[4796]: I1125 14:24:49.810385 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:49 crc kubenswrapper[4796]: I1125 14:24:49.828311 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 25 14:24:50 crc kubenswrapper[4796]: I1125 14:24:50.523994 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:50 
crc kubenswrapper[4796]: I1125 14:24:50.525288 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:50 crc kubenswrapper[4796]: I1125 14:24:50.525337 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:50 crc kubenswrapper[4796]: I1125 14:24:50.525353 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:52 crc kubenswrapper[4796]: E1125 14:24:52.511854 4796 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 25 14:24:52 crc kubenswrapper[4796]: E1125 14:24:52.897886 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 25 14:24:52 crc kubenswrapper[4796]: I1125 14:24:52.900919 4796 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 25 14:24:52 crc kubenswrapper[4796]: E1125 14:24:52.902955 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 25 14:24:52 crc kubenswrapper[4796]: I1125 14:24:52.909848 4796 trace.go:236] Trace[1573007322]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 14:24:41.170) (total time: 11738ms): Nov 25 14:24:52 crc kubenswrapper[4796]: Trace[1573007322]: ---"Objects listed" error: 11738ms (14:24:52.909) Nov 25 14:24:52 crc kubenswrapper[4796]: Trace[1573007322]: [11.738909175s] [11.738909175s] END Nov 25 14:24:52 crc kubenswrapper[4796]: I1125 14:24:52.910034 4796 reflector.go:368] Caches populated for *v1.CSIDriver from 
k8s.io/client-go/informers/factory.go:160 Nov 25 14:24:52 crc kubenswrapper[4796]: I1125 14:24:52.911441 4796 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 25 14:24:52 crc kubenswrapper[4796]: I1125 14:24:52.911482 4796 trace.go:236] Trace[970191878]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 14:24:40.239) (total time: 12672ms): Nov 25 14:24:52 crc kubenswrapper[4796]: Trace[970191878]: ---"Objects listed" error: 12672ms (14:24:52.911) Nov 25 14:24:52 crc kubenswrapper[4796]: Trace[970191878]: [12.672115109s] [12.672115109s] END Nov 25 14:24:52 crc kubenswrapper[4796]: I1125 14:24:52.911718 4796 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 25 14:24:52 crc kubenswrapper[4796]: I1125 14:24:52.927201 4796 trace.go:236] Trace[954710845]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 14:24:40.509) (total time: 12417ms): Nov 25 14:24:52 crc kubenswrapper[4796]: Trace[954710845]: ---"Objects listed" error: 12417ms (14:24:52.926) Nov 25 14:24:52 crc kubenswrapper[4796]: Trace[954710845]: [12.417864226s] [12.417864226s] END Nov 25 14:24:52 crc kubenswrapper[4796]: I1125 14:24:52.927243 4796 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.325505 4796 apiserver.go:52] "Watching apiserver" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.329079 4796 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.329388 4796 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.329842 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.329922 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.330056 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:24:53 crc kubenswrapper[4796]: E1125 14:24:53.330352 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:24:53 crc kubenswrapper[4796]: E1125 14:24:53.330447 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.330619 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.330738 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:24:53 crc kubenswrapper[4796]: E1125 14:24:53.330797 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.330870 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.335736 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.336020 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.336235 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.336407 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.336610 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.336730 4796 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.336788 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.338442 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.339006 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.339036 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.344206 4796 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.351847 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.372727 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.378269 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.391483 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.402485 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.413705 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.414879 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.414948 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.414986 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.415017 4796 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.415052 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.415086 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.415120 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.415151 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.415185 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.415218 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.415251 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.415283 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.415318 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.415351 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.415384 4796 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.415419 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.415458 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.415494 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.415543 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.415601 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.415655 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.416066 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.416120 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.416157 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.416193 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.416226 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.416259 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.416291 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.416334 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.416407 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.416447 4796 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.416504 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.416540 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.416597 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.416640 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.416676 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.416745 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.416780 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.416812 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.416852 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.416889 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.416927 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.416980 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.417013 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.417053 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.417100 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432109 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432159 
4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432183 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432202 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432221 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432245 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432265 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432284 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432306 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432324 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432345 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432367 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432388 4796 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432409 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432429 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432450 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432469 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432489 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432509 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432528 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432546 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432581 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432653 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432678 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432698 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432719 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432738 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432759 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432779 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432800 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432832 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432850 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432869 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432889 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432907 4796 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432925 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432946 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432964 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432997 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433022 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433047 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433073 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433095 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433118 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433140 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433165 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433187 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433209 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433232 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433256 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433285 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433308 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433332 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433358 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433466 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433498 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433524 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433550 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433593 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433620 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433695 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433722 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: 
\"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433745 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433766 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433788 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433811 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433831 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433854 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433875 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433896 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433917 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433938 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433963 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 
14:24:53.433987 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434014 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434064 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434089 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434115 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434141 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434162 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434184 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434207 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434230 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434255 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434282 4796 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434308 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434333 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434357 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434384 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434411 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") 
pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434442 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434465 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434489 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434511 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432132 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432189 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432216 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432287 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432354 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432380 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432387 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432412 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432476 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432502 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432566 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432686 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432689 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432774 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432848 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.432979 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433175 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433174 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434816 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433202 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433268 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433326 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433374 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433444 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433512 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433599 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433596 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433554 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433610 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433681 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433932 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433988 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.433987 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434000 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434231 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434282 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434326 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434367 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434438 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434509 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: E1125 14:24:53.434540 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:24:53.934519081 +0000 UTC m=+22.277628515 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435134 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435187 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435231 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435268 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435295 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435309 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435301 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435127 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435354 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435378 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435416 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435420 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435446 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435477 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435502 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435527 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435552 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435610 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435637 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435662 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435685 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435708 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435726 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: 
"kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435756 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435844 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.436174 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.436234 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.436239 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434561 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434740 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434820 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434833 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.436357 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.436392 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.434849 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435033 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.436479 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.436626 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.436638 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.436664 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.437072 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.438408 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.438619 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.438655 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.438738 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.439551 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.439857 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.440117 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.440271 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.440390 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.440989 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.441007 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.441022 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.441328 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.441651 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.441710 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.441944 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.442239 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.442351 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.442517 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.442979 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.442992 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.443262 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.443507 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.443809 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.443980 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.444168 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.444174 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.444624 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.445007 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.445030 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.445182 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.445232 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.445420 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.445883 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.445988 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.446040 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.446046 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.446368 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.446369 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.446454 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.446426 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.446812 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.447121 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.447610 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.447978 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.448248 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.448416 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.448488 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.448874 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.448978 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.449102 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.449243 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.449284 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.449523 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.449639 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.449988 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.450127 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.435732 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.450182 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.450202 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.450232 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.450254 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.450275 4796 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.450295 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.450317 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.450342 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.450367 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.450382 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod 
"09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.450411 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.450440 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.450462 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451071 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451102 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 14:24:53 crc 
kubenswrapper[4796]: I1125 14:24:53.451130 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451156 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451181 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451205 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451225 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451275 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451300 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451318 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451335 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451351 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451369 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451403 4796 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451433 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451455 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451494 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451517 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451537 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451593 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451623 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451646 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451668 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451703 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451756 4796 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451780 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451829 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451856 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451880 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451905 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451927 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451972 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452008 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452040 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452079 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452130 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452157 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452184 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452210 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod 
\"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452235 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452367 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452384 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452396 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452408 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452419 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452430 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452441 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452453 4796 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452464 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452475 4796 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452502 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452515 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452527 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node 
\"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452540 4796 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452555 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452566 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452606 4796 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452619 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452631 4796 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452643 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452656 4796 
reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452682 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452695 4796 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452707 4796 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452720 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452731 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452742 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452756 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452769 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452782 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452793 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452805 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452818 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452832 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452844 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node 
\"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452853 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452862 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452871 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452879 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452888 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452897 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452905 4796 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452915 4796 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452924 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452933 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452955 4796 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452966 4796 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452977 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452985 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452993 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453002 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453011 4796 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453019 4796 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453028 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453037 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453046 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453054 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 
crc kubenswrapper[4796]: I1125 14:24:53.453064 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453073 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453081 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453090 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453099 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453108 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453117 4796 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: 
I1125 14:24:53.453125 4796 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453138 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453146 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453156 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453164 4796 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453172 4796 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453182 4796 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453204 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453215 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453225 4796 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453234 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453242 4796 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453250 4796 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453259 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453267 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 25 
14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453275 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453284 4796 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453292 4796 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453300 4796 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453309 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453317 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453326 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453334 4796 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453342 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453351 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453360 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453369 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453378 4796 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453388 4796 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453397 4796 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453419 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453430 4796 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453450 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453461 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453473 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453483 4796 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453492 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453502 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453512 4796 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453521 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453530 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453538 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453551 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453560 4796 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453603 4796 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453616 4796 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453628 4796 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453638 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453646 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453654 4796 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453663 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453672 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453681 4796 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453694 4796 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453702 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453710 4796 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453718 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453727 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc 
kubenswrapper[4796]: I1125 14:24:53.453735 4796 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.450529 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.450591 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.450633 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.450705 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.454538 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451028 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451051 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451350 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451538 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451834 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.451910 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452104 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452152 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452422 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452455 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452620 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452769 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.452887 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453103 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453351 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453557 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.453651 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.454022 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.454707 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.454112 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.454132 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.454430 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.454861 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.454938 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.455274 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.455542 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.455876 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.455917 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: E1125 14:24:53.456015 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.456052 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: E1125 14:24:53.456082 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:24:53.956063798 +0000 UTC m=+22.299173232 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.456334 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.456361 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.456484 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.457230 4796 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.457706 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.458758 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.459559 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 14:24:53 crc kubenswrapper[4796]: E1125 14:24:53.459719 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:24:53 crc kubenswrapper[4796]: E1125 14:24:53.459776 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:24:53.959758127 +0000 UTC m=+22.302867631 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.460716 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.461271 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.463226 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.463433 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.463752 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.463927 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.464700 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.466466 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.468615 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.469124 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.469371 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.469600 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.472186 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.472952 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.480871 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.482860 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.483026 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: E1125 14:24:53.483342 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:24:53 crc kubenswrapper[4796]: E1125 14:24:53.483378 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:24:53 crc kubenswrapper[4796]: E1125 14:24:53.483397 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.483443 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: E1125 14:24:53.485189 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:24:53 crc kubenswrapper[4796]: E1125 14:24:53.485222 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:24:53 crc kubenswrapper[4796]: E1125 14:24:53.485247 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.486207 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.486602 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.486740 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: E1125 14:24:53.483477 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 14:24:53.983452072 +0000 UTC m=+22.326561606 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:24:53 crc kubenswrapper[4796]: E1125 14:24:53.487827 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 14:24:53.987808651 +0000 UTC m=+22.330918075 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.488079 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.488847 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.488981 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.490094 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.491075 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.491181 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.491229 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.491539 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.492506 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.492717 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.493125 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.494419 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.496020 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.498598 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.498965 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.499063 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.499380 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.500286 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.500695 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.506320 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.523178 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.529408 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:24:53 crc kubenswrapper[4796]: E1125 14:24:53.547668 4796 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.554852 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.554885 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.554955 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.554965 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.554974 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.554983 4796 reconciler_common.go:293] "Volume detached for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.554983 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.554992 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555034 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555044 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555054 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555063 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: 
I1125 14:24:53.555071 4796 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555079 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555088 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555263 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555086 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555272 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555350 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 
14:24:53.555364 4796 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555377 4796 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555389 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555399 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555411 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555424 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555436 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555446 4796 reconciler_common.go:293] 
"Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555457 4796 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555468 4796 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555479 4796 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555489 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555500 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555511 4796 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555522 4796 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555535 4796 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555546 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555557 4796 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555587 4796 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555599 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555610 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555621 4796 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555632 4796 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555643 4796 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555654 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555666 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555676 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555687 4796 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555699 4796 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555710 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555720 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555746 4796 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555757 4796 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555768 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555780 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555791 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" 
DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555803 4796 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555816 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555826 4796 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555837 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555848 4796 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555858 4796 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555869 4796 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555880 4796 reconciler_common.go:293] "Volume detached 
for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555892 4796 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555903 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555913 4796 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555924 4796 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555935 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555948 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555959 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555970 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555980 4796 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.555991 4796 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.556002 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.556027 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.636988 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.637566 4796 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" 
start-of-body= Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.637667 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.641199 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.649148 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.651273 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.651426 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.658336 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.664566 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.664756 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.678559 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.692516 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.706750 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.720206 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.737328 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.748908 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.763419 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.778306 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.788362 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.798133 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.813965 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483d635f462943e9b3ee4f3c5499059b3fb2c4aed2daac5e68b4ef05c1699f1e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:36Z\\\",\\\"message\\\":\\\"W1125 14:24:36.340964 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 14:24:36.341512 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764080676 cert, and key in /tmp/serving-cert-687248550/serving-signer.crt, /tmp/serving-cert-687248550/serving-signer.key\\\\nI1125 14:24:36.685623 1 observer_polling.go:159] Starting file observer\\\\nW1125 14:24:36.687690 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:24:36.687799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:36.691232 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-687248550/tls.crt::/tmp/serving-cert-687248550/tls.key\\\\\\\"\\\\nF1125 14:24:36.903776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.824928 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.840724 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.958944 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:24:53 crc kubenswrapper[4796]: I1125 14:24:53.959223 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:24:53 crc kubenswrapper[4796]: E1125 14:24:53.959463 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:24:53 crc kubenswrapper[4796]: E1125 14:24:53.959612 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:24:54.959594428 +0000 UTC m=+23.302703862 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:24:53 crc kubenswrapper[4796]: E1125 14:24:53.959773 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:24:54.959761814 +0000 UTC m=+23.302871248 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.060279 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.060334 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.060372 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:24:54 crc kubenswrapper[4796]: E1125 14:24:54.060513 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered Nov 25 14:24:54 crc kubenswrapper[4796]: E1125 14:24:54.060534 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:24:54 crc kubenswrapper[4796]: E1125 14:24:54.060511 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:24:54 crc kubenswrapper[4796]: E1125 14:24:54.060600 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:24:54 crc kubenswrapper[4796]: E1125 14:24:54.060539 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:24:54 crc kubenswrapper[4796]: E1125 14:24:54.060676 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:24:55.060658554 +0000 UTC m=+23.403767978 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:24:54 crc kubenswrapper[4796]: E1125 14:24:54.060561 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:24:54 crc kubenswrapper[4796]: E1125 14:24:54.060747 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 14:24:55.060731436 +0000 UTC m=+23.403840870 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:24:54 crc kubenswrapper[4796]: E1125 14:24:54.060612 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:24:54 crc kubenswrapper[4796]: E1125 14:24:54.060860 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 14:24:55.06083806 +0000 UTC m=+23.403947484 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.422062 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.423770 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.426775 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.428362 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.430620 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.431680 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.433368 4796 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.435772 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.436765 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.438272 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.439010 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.439976 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.441353 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.442132 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.443456 4796 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.444418 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.445886 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.446944 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.448546 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.450485 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.451725 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.452675 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.457012 4796 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.458088 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.460159 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.461186 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.462634 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.463301 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.464174 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.465452 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.466240 4796 kubelet_volumes.go:152] 
"Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.466419 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.470081 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.470781 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.471324 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.474142 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.475089 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.475917 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: 
I1125 14:24:54.477465 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.478948 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.479639 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.480490 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.482286 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.483873 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.484514 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.485913 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: 
I1125 14:24:54.487363 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.489064 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.489729 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.492831 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.493528 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.494331 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.495958 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.496620 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 25 14:24:54 crc kubenswrapper[4796]: 
I1125 14:24:54.539818 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8cd6b5010d8480313f2c36fdc91aedd75c2d5007aa04c5c638e9fb6d2a3deb13"} Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.541006 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5"} Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.541066 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"83ee6b65566304bae70cc591846c8d8615d280e59e76b2f37a4cdebb262bea73"} Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.542207 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa"} Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.542248 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9d193f14bdb8a5628491c2bfa1ce7ef2747fd50e0d697dd57838381b8a50ff3c"} Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.544044 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.544505 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.546109 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1" exitCode=255 Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.546195 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1"} Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.546264 4796 scope.go:117] "RemoveContainer" containerID="483d635f462943e9b3ee4f3c5499059b3fb2c4aed2daac5e68b4ef05c1699f1e" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.546286 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-nz5r2"] Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.547042 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-nz5r2" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.561454 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.561666 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.561785 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 25 14:24:54 crc kubenswrapper[4796]: E1125 14:24:54.569121 4796 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.569442 4796 scope.go:117] "RemoveContainer" containerID="f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1" Nov 25 14:24:54 crc kubenswrapper[4796]: E1125 14:24:54.569709 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.583005 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483d635f462943e9b3ee4f3c5499059b3fb2c4aed2daac5e68b4ef05c1699f1e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:36Z\\\",\\\"message\\\":\\\"W1125 14:24:36.340964 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 14:24:36.341512 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764080676 cert, and key in /tmp/serving-cert-687248550/serving-signer.crt, /tmp/serving-cert-687248550/serving-signer.key\\\\nI1125 14:24:36.685623 1 observer_polling.go:159] Starting file observer\\\\nW1125 14:24:36.687690 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:24:36.687799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:36.691232 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-687248550/tls.crt::/tmp/serving-cert-687248550/tls.key\\\\\\\"\\\\nF1125 14:24:36.903776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.603513 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.618121 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.633716 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.648893 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.660250 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.665704 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ea9ffd59-232e-4973-8470-910389b782ad-hosts-file\") pod \"node-resolver-nz5r2\" (UID: \"ea9ffd59-232e-4973-8470-910389b782ad\") " pod="openshift-dns/node-resolver-nz5r2" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.665789 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntq5c\" (UniqueName: \"kubernetes.io/projected/ea9ffd59-232e-4973-8470-910389b782ad-kube-api-access-ntq5c\") pod \"node-resolver-nz5r2\" (UID: \"ea9ffd59-232e-4973-8470-910389b782ad\") " pod="openshift-dns/node-resolver-nz5r2" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.674476 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.694178 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.713175 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.724025 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.732776 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.744736 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.758368 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483d635f462943e9b3ee4f3c5499059b3fb2c4aed2daac5e68b4ef05c1699f1e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:36Z\\\",\\\"message\\\":\\\"W1125 14:24:36.340964 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 14:24:36.341512 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764080676 cert, and key in /tmp/serving-cert-687248550/serving-signer.crt, /tmp/serving-cert-687248550/serving-signer.key\\\\nI1125 14:24:36.685623 1 observer_polling.go:159] Starting file observer\\\\nW1125 14:24:36.687690 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:24:36.687799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:36.691232 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-687248550/tls.crt::/tmp/serving-cert-687248550/tls.key\\\\\\\"\\\\nF1125 14:24:36.903776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 
14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd79
1fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.766609 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntq5c\" (UniqueName: \"kubernetes.io/projected/ea9ffd59-232e-4973-8470-910389b782ad-kube-api-access-ntq5c\") pod \"node-resolver-nz5r2\" (UID: \"ea9ffd59-232e-4973-8470-910389b782ad\") " pod="openshift-dns/node-resolver-nz5r2" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.766662 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ea9ffd59-232e-4973-8470-910389b782ad-hosts-file\") pod \"node-resolver-nz5r2\" (UID: \"ea9ffd59-232e-4973-8470-910389b782ad\") " pod="openshift-dns/node-resolver-nz5r2" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.766760 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ea9ffd59-232e-4973-8470-910389b782ad-hosts-file\") pod \"node-resolver-nz5r2\" (UID: 
\"ea9ffd59-232e-4973-8470-910389b782ad\") " pod="openshift-dns/node-resolver-nz5r2" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.772827 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clus
ter-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.782754 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.788795 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntq5c\" (UniqueName: \"kubernetes.io/projected/ea9ffd59-232e-4973-8470-910389b782ad-kube-api-access-ntq5c\") pod \"node-resolver-nz5r2\" (UID: \"ea9ffd59-232e-4973-8470-910389b782ad\") " pod="openshift-dns/node-resolver-nz5r2" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 
14:24:54.791322 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.797959 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.864490 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-nz5r2" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.949666 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-ch8mf"] Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.950299 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-h6xfl"] Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.950466 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-ch8mf" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.950952 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-w88nx"] Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.951078 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.953745 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.954135 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.954297 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-w88nx" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.954373 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.954467 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.954598 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.956819 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.959021 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.959147 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.959249 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.959309 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.959456 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.959942 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.968181 
4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.968265 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:24:54 crc kubenswrapper[4796]: E1125 14:24:54.968368 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:24:54 crc kubenswrapper[4796]: E1125 14:24:54.968422 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:24:56.968408585 +0000 UTC m=+25.311518009 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:24:54 crc kubenswrapper[4796]: E1125 14:24:54.968499 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 14:24:56.968477027 +0000 UTC m=+25.311586451 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.972172 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483d635f462943e9b3ee4f3c5499059b3fb2c4aed2daac5e68b4ef05c1699f1e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:36Z\\\",\\\"message\\\":\\\"W1125 14:24:36.340964 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 14:24:36.341512 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764080676 cert, and key in /tmp/serving-cert-687248550/serving-signer.crt, /tmp/serving-cert-687248550/serving-signer.key\\\\nI1125 14:24:36.685623 1 observer_polling.go:159] Starting file observer\\\\nW1125 14:24:36.687690 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:24:36.687799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:36.691232 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-687248550/tls.crt::/tmp/serving-cert-687248550/tls.key\\\\\\\"\\\\nF1125 14:24:36.903776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.982401 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:54 crc kubenswrapper[4796]: I1125 14:24:54.995021 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.004161 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.012320 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.023374 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.035891 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.046337 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.056021 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.065733 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069040 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069091 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/93354b1f-76e7-4d82-999f-8093919ba0f7-cni-binary-copy\") pod \"multus-additional-cni-plugins-w88nx\" (UID: \"93354b1f-76e7-4d82-999f-8093919ba0f7\") " pod="openshift-multus/multus-additional-cni-plugins-w88nx" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069118 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069140 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq6fl\" (UniqueName: \"kubernetes.io/projected/93354b1f-76e7-4d82-999f-8093919ba0f7-kube-api-access-rq6fl\") pod \"multus-additional-cni-plugins-w88nx\" (UID: \"93354b1f-76e7-4d82-999f-8093919ba0f7\") " pod="openshift-multus/multus-additional-cni-plugins-w88nx" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069162 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-host-var-lib-cni-multus\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069208 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/93354b1f-76e7-4d82-999f-8093919ba0f7-cnibin\") pod \"multus-additional-cni-plugins-w88nx\" (UID: \"93354b1f-76e7-4d82-999f-8093919ba0f7\") " pod="openshift-multus/multus-additional-cni-plugins-w88nx" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069231 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-host-run-multus-certs\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069254 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c683b765-b1f2-49b1-b29d-6466cda73ca8-rootfs\") pod \"machine-config-daemon-h6xfl\" (UID: \"c683b765-b1f2-49b1-b29d-6466cda73ca8\") " pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 14:24:55 crc kubenswrapper[4796]: E1125 14:24:55.069226 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069288 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/93354b1f-76e7-4d82-999f-8093919ba0f7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-w88nx\" (UID: \"93354b1f-76e7-4d82-999f-8093919ba0f7\") " pod="openshift-multus/multus-additional-cni-plugins-w88nx" Nov 25 14:24:55 crc kubenswrapper[4796]: E1125 14:24:55.069292 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069315 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-system-cni-dir\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: E1125 14:24:55.069321 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:24:55 crc kubenswrapper[4796]: E1125 14:24:55.069359 4796 projected.go:194] Error preparing data 
for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:24:55 crc kubenswrapper[4796]: E1125 14:24:55.069332 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:24:57.069314415 +0000 UTC m=+25.412423839 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069432 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-multus-cni-dir\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069462 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c683b765-b1f2-49b1-b29d-6466cda73ca8-proxy-tls\") pod \"machine-config-daemon-h6xfl\" (UID: \"c683b765-b1f2-49b1-b29d-6466cda73ca8\") " pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069491 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-host-var-lib-cni-bin\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: E1125 14:24:55.069525 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 14:24:57.069506092 +0000 UTC m=+25.412615526 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069564 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/93354b1f-76e7-4d82-999f-8093919ba0f7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-w88nx\" (UID: \"93354b1f-76e7-4d82-999f-8093919ba0f7\") " pod="openshift-multus/multus-additional-cni-plugins-w88nx" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069613 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-os-release\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069636 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-host-run-k8s-cni-cncf-io\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069660 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-cni-binary-copy\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069680 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-etc-kubernetes\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069700 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz9q6\" (UniqueName: \"kubernetes.io/projected/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-kube-api-access-mz9q6\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069733 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069755 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" 
(UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-cnibin\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069777 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-multus-daemon-config\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069799 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c683b765-b1f2-49b1-b29d-6466cda73ca8-mcd-auth-proxy-config\") pod \"machine-config-daemon-h6xfl\" (UID: \"c683b765-b1f2-49b1-b29d-6466cda73ca8\") " pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 14:24:55 crc kubenswrapper[4796]: E1125 14:24:55.069815 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:24:55 crc kubenswrapper[4796]: E1125 14:24:55.069831 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:24:55 crc kubenswrapper[4796]: E1125 14:24:55.069842 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069827 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/93354b1f-76e7-4d82-999f-8093919ba0f7-system-cni-dir\") pod \"multus-additional-cni-plugins-w88nx\" (UID: \"93354b1f-76e7-4d82-999f-8093919ba0f7\") " pod="openshift-multus/multus-additional-cni-plugins-w88nx" Nov 25 14:24:55 crc kubenswrapper[4796]: E1125 14:24:55.069884 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 14:24:57.069876653 +0000 UTC m=+25.412986067 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069910 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-multus-conf-dir\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069934 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-host-var-lib-kubelet\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069952 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc5nv\" (UniqueName: \"kubernetes.io/projected/c683b765-b1f2-49b1-b29d-6466cda73ca8-kube-api-access-qc5nv\") pod \"machine-config-daemon-h6xfl\" (UID: \"c683b765-b1f2-49b1-b29d-6466cda73ca8\") " pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069978 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-host-run-netns\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.069996 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/93354b1f-76e7-4d82-999f-8093919ba0f7-os-release\") pod \"multus-additional-cni-plugins-w88nx\" (UID: \"93354b1f-76e7-4d82-999f-8093919ba0f7\") " pod="openshift-multus/multus-additional-cni-plugins-w88nx" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.070012 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-multus-socket-dir-parent\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.070033 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-hostroot\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.077444 4796 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.083357 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.091081 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.099484 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.106665 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.115182 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483d635f462943e9b3ee4f3c5499059b3fb2c4aed2daac5e68b4ef05c1699f1e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:36Z\\\",\\\"message\\\":\\\"W1125 14:24:36.340964 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 14:24:36.341512 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764080676 cert, and key in /tmp/serving-cert-687248550/serving-signer.crt, /tmp/serving-cert-687248550/serving-signer.key\\\\nI1125 14:24:36.685623 1 observer_polling.go:159] Starting file observer\\\\nW1125 14:24:36.687690 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:24:36.687799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:36.691232 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-687248550/tls.crt::/tmp/serving-cert-687248550/tls.key\\\\\\\"\\\\nF1125 14:24:36.903776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.122480 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.135201 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.145758 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.164669 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.171296 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/93354b1f-76e7-4d82-999f-8093919ba0f7-cnibin\") pod \"multus-additional-cni-plugins-w88nx\" (UID: \"93354b1f-76e7-4d82-999f-8093919ba0f7\") " pod="openshift-multus/multus-additional-cni-plugins-w88nx" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.171334 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-host-run-multus-certs\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.171354 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c683b765-b1f2-49b1-b29d-6466cda73ca8-rootfs\") pod \"machine-config-daemon-h6xfl\" (UID: \"c683b765-b1f2-49b1-b29d-6466cda73ca8\") " 
pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.171372 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-multus-cni-dir\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.171378 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/93354b1f-76e7-4d82-999f-8093919ba0f7-cnibin\") pod \"multus-additional-cni-plugins-w88nx\" (UID: \"93354b1f-76e7-4d82-999f-8093919ba0f7\") " pod="openshift-multus/multus-additional-cni-plugins-w88nx" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.171436 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-host-run-multus-certs\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.171469 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c683b765-b1f2-49b1-b29d-6466cda73ca8-rootfs\") pod \"machine-config-daemon-h6xfl\" (UID: \"c683b765-b1f2-49b1-b29d-6466cda73ca8\") " pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.171388 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c683b765-b1f2-49b1-b29d-6466cda73ca8-proxy-tls\") pod \"machine-config-daemon-h6xfl\" (UID: \"c683b765-b1f2-49b1-b29d-6466cda73ca8\") " pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 14:24:55 crc 
kubenswrapper[4796]: I1125 14:24:55.171806 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/93354b1f-76e7-4d82-999f-8093919ba0f7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-w88nx\" (UID: \"93354b1f-76e7-4d82-999f-8093919ba0f7\") " pod="openshift-multus/multus-additional-cni-plugins-w88nx" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.171890 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-system-cni-dir\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.171921 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-host-var-lib-cni-bin\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.171956 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/93354b1f-76e7-4d82-999f-8093919ba0f7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-w88nx\" (UID: \"93354b1f-76e7-4d82-999f-8093919ba0f7\") " pod="openshift-multus/multus-additional-cni-plugins-w88nx" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.171987 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-os-release\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.172016 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-host-run-k8s-cni-cncf-io\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.172048 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz9q6\" (UniqueName: \"kubernetes.io/projected/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-kube-api-access-mz9q6\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.172081 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-cni-binary-copy\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.172117 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-etc-kubernetes\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.172145 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-system-cni-dir\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.172166 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-cnibin\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.172243 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-cnibin\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.172260 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-multus-daemon-config\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.172307 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-host-var-lib-cni-bin\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.172323 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/93354b1f-76e7-4d82-999f-8093919ba0f7-system-cni-dir\") pod \"multus-additional-cni-plugins-w88nx\" (UID: \"93354b1f-76e7-4d82-999f-8093919ba0f7\") " pod="openshift-multus/multus-additional-cni-plugins-w88nx" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.172363 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-multus-conf-dir\") pod \"multus-ch8mf\" (UID: 
\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.172386 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c683b765-b1f2-49b1-b29d-6466cda73ca8-mcd-auth-proxy-config\") pod \"machine-config-daemon-h6xfl\" (UID: \"c683b765-b1f2-49b1-b29d-6466cda73ca8\") " pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.172411 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-host-var-lib-kubelet\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.172452 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc5nv\" (UniqueName: \"kubernetes.io/projected/c683b765-b1f2-49b1-b29d-6466cda73ca8-kube-api-access-qc5nv\") pod \"machine-config-daemon-h6xfl\" (UID: \"c683b765-b1f2-49b1-b29d-6466cda73ca8\") " pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.172477 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-host-run-netns\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.172498 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-hostroot\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " 
pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.172541 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/93354b1f-76e7-4d82-999f-8093919ba0f7-os-release\") pod \"multus-additional-cni-plugins-w88nx\" (UID: \"93354b1f-76e7-4d82-999f-8093919ba0f7\") " pod="openshift-multus/multus-additional-cni-plugins-w88nx" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.172564 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-multus-socket-dir-parent\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.172691 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/93354b1f-76e7-4d82-999f-8093919ba0f7-cni-binary-copy\") pod \"multus-additional-cni-plugins-w88nx\" (UID: \"93354b1f-76e7-4d82-999f-8093919ba0f7\") " pod="openshift-multus/multus-additional-cni-plugins-w88nx" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.172716 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-os-release\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.171680 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-multus-cni-dir\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 
14:24:55.172752 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-host-var-lib-cni-multus\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.172718 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-host-var-lib-cni-multus\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.172814 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-host-var-lib-kubelet\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.172833 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq6fl\" (UniqueName: \"kubernetes.io/projected/93354b1f-76e7-4d82-999f-8093919ba0f7-kube-api-access-rq6fl\") pod \"multus-additional-cni-plugins-w88nx\" (UID: \"93354b1f-76e7-4d82-999f-8093919ba0f7\") " pod="openshift-multus/multus-additional-cni-plugins-w88nx" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.173004 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-host-run-k8s-cni-cncf-io\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.172989 4796 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/93354b1f-76e7-4d82-999f-8093919ba0f7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-w88nx\" (UID: \"93354b1f-76e7-4d82-999f-8093919ba0f7\") " pod="openshift-multus/multus-additional-cni-plugins-w88nx" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.173056 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-hostroot\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.173203 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-host-run-netns\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.173190 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.173238 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-etc-kubernetes\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.173346 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"os-release\" (UniqueName: \"kubernetes.io/host-path/93354b1f-76e7-4d82-999f-8093919ba0f7-os-release\") pod \"multus-additional-cni-plugins-w88nx\" (UID: \"93354b1f-76e7-4d82-999f-8093919ba0f7\") " pod="openshift-multus/multus-additional-cni-plugins-w88nx" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.173407 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/93354b1f-76e7-4d82-999f-8093919ba0f7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-w88nx\" (UID: \"93354b1f-76e7-4d82-999f-8093919ba0f7\") " pod="openshift-multus/multus-additional-cni-plugins-w88nx" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.173414 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/93354b1f-76e7-4d82-999f-8093919ba0f7-system-cni-dir\") pod \"multus-additional-cni-plugins-w88nx\" (UID: \"93354b1f-76e7-4d82-999f-8093919ba0f7\") " pod="openshift-multus/multus-additional-cni-plugins-w88nx" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.173439 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-multus-conf-dir\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.173813 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-cni-binary-copy\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.173858 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/93354b1f-76e7-4d82-999f-8093919ba0f7-cni-binary-copy\") pod \"multus-additional-cni-plugins-w88nx\" (UID: \"93354b1f-76e7-4d82-999f-8093919ba0f7\") " pod="openshift-multus/multus-additional-cni-plugins-w88nx" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.173904 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-multus-socket-dir-parent\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.173977 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c683b765-b1f2-49b1-b29d-6466cda73ca8-mcd-auth-proxy-config\") pod \"machine-config-daemon-h6xfl\" (UID: \"c683b765-b1f2-49b1-b29d-6466cda73ca8\") " pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.174161 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-multus-daemon-config\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.177515 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c683b765-b1f2-49b1-b29d-6466cda73ca8-proxy-tls\") pod \"machine-config-daemon-h6xfl\" (UID: \"c683b765-b1f2-49b1-b29d-6466cda73ca8\") " pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.192992 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz9q6\" (UniqueName: 
\"kubernetes.io/projected/7e00ee09-b0b0-4ae8-a51d-cc11fb99679b-kube-api-access-mz9q6\") pod \"multus-ch8mf\" (UID: \"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\") " pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.193040 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc5nv\" (UniqueName: \"kubernetes.io/projected/c683b765-b1f2-49b1-b29d-6466cda73ca8-kube-api-access-qc5nv\") pod \"machine-config-daemon-h6xfl\" (UID: \"c683b765-b1f2-49b1-b29d-6466cda73ca8\") " pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.193053 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq6fl\" (UniqueName: \"kubernetes.io/projected/93354b1f-76e7-4d82-999f-8093919ba0f7-kube-api-access-rq6fl\") pod \"multus-additional-cni-plugins-w88nx\" (UID: \"93354b1f-76e7-4d82-999f-8093919ba0f7\") " pod="openshift-multus/multus-additional-cni-plugins-w88nx" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.192933 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.272223 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-ch8mf" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.280197 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 14:24:55 crc kubenswrapper[4796]: W1125 14:24:55.283479 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e00ee09_b0b0_4ae8_a51d_cc11fb99679b.slice/crio-15d591461d50cf25ba18ca5d7495bdb78e528fde3d389856c339257d8d419673 WatchSource:0}: Error finding container 15d591461d50cf25ba18ca5d7495bdb78e528fde3d389856c339257d8d419673: Status 404 returned error can't find the container with id 15d591461d50cf25ba18ca5d7495bdb78e528fde3d389856c339257d8d419673 Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.285514 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-w88nx" Nov 25 14:24:55 crc kubenswrapper[4796]: W1125 14:24:55.293462 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc683b765_b1f2_49b1_b29d_6466cda73ca8.slice/crio-e816ad0f321e1f2813b9d7ffbc8382129cd85052009d45b504dfa231b1110fc8 WatchSource:0}: Error finding container e816ad0f321e1f2813b9d7ffbc8382129cd85052009d45b504dfa231b1110fc8: Status 404 returned error can't find the container with id e816ad0f321e1f2813b9d7ffbc8382129cd85052009d45b504dfa231b1110fc8 Nov 25 14:24:55 crc kubenswrapper[4796]: W1125 14:24:55.299377 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93354b1f_76e7_4d82_999f_8093919ba0f7.slice/crio-bf9accc7d78a190b230bafe701775e5968081f46f4bd1bbce930191692b7db35 WatchSource:0}: Error finding container bf9accc7d78a190b230bafe701775e5968081f46f4bd1bbce930191692b7db35: Status 404 returned error can't find the container with id bf9accc7d78a190b230bafe701775e5968081f46f4bd1bbce930191692b7db35 Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.334919 4796 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-22sz8"] Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.336682 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.340070 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.340358 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.341242 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.341425 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.341703 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.341815 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.342721 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.353023 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.363490 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.374395 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-var-lib-openvswitch\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.374424 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-cni-bin\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.374441 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.374511 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-run-netns\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.374543 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-run-openvswitch\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.374590 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-ovnkube-config\") pod 
\"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.374700 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-run-ovn\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.374753 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-kubelet\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.374777 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-etc-openvswitch\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.374797 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8srjn\" (UniqueName: \"kubernetes.io/projected/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-kube-api-access-8srjn\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.374821 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-ovn-node-metrics-cert\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.374850 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-slash\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.374866 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-env-overrides\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.374931 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-ovnkube-script-lib\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.375046 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-node-log\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.375089 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-run-ovn-kubernetes\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.375128 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-log-socket\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.375158 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-cni-netd\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.375195 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-systemd-units\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.375226 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-run-systemd\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.379381 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.408790 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:24:55 crc kubenswrapper[4796]: E1125 14:24:55.408961 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.409668 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:24:55 crc kubenswrapper[4796]: E1125 14:24:55.409762 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.409832 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:24:55 crc kubenswrapper[4796]: E1125 14:24:55.409912 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.420958 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.438586 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.462643 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.476956 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-systemd-units\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477012 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-run-systemd\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477031 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-var-lib-openvswitch\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477047 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-cni-bin\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477064 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477094 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-run-netns\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477108 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-run-openvswitch\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477127 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-ovnkube-config\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477142 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-kubelet\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477157 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-run-ovn\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477182 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-etc-openvswitch\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477197 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8srjn\" (UniqueName: \"kubernetes.io/projected/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-kube-api-access-8srjn\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477213 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-ovn-node-metrics-cert\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477227 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-slash\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477240 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-env-overrides\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477253 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-ovnkube-script-lib\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477275 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-node-log\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477296 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-run-ovn-kubernetes\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477312 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-log-socket\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477325 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-cni-netd\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477378 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-cni-netd\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477423 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-systemd-units\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477442 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-run-systemd\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477460 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-var-lib-openvswitch\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477480 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-cni-bin\") pod \"ovnkube-node-22sz8\" (UID: 
\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477500 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477525 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-run-netns\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.477544 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-run-openvswitch\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.478199 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-ovnkube-config\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.478428 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-kubelet\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.478466 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-run-ovn\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.478491 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-etc-openvswitch\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.478838 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-node-log\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.478879 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-run-ovn-kubernetes\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.478923 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-log-socket\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.478950 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-slash\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.479151 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-ovnkube-script-lib\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.479444 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-env-overrides\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.482557 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-ovn-node-metrics-cert\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.488407 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.496192 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8srjn\" (UniqueName: \"kubernetes.io/projected/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-kube-api-access-8srjn\") pod \"ovnkube-node-22sz8\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.501708 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://483d635f462943e9b3ee4f3c5499059b3fb2c4aed2daac5e68b4ef05c1699f1e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:36Z\\\",\\\"message\\\":\\\"W1125 14:24:36.340964 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 14:24:36.341512 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764080676 cert, and key in /tmp/serving-cert-687248550/serving-signer.crt, /tmp/serving-cert-687248550/serving-signer.key\\\\nI1125 14:24:36.685623 1 observer_polling.go:159] Starting file observer\\\\nW1125 14:24:36.687690 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:24:36.687799 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:36.691232 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-687248550/tls.crt::/tmp/serving-cert-687248550/tls.key\\\\\\\"\\\\nF1125 14:24:36.903776 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.513311 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.523744 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.536694 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.549398 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerStarted","Data":"52ae8d61d0942e0624997f6214aa104a793f603be378ede8e4896846b2f06db4"} Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.549459 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerStarted","Data":"e816ad0f321e1f2813b9d7ffbc8382129cd85052009d45b504dfa231b1110fc8"} Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.550294 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.551550 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nz5r2" event={"ID":"ea9ffd59-232e-4973-8470-910389b782ad","Type":"ContainerStarted","Data":"6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff"} Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.551603 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/node-resolver-nz5r2" event={"ID":"ea9ffd59-232e-4973-8470-910389b782ad","Type":"ContainerStarted","Data":"4a190dba506593fb99e19cff7a34129441963ed9a709b7df3976f251aa0ec1c0"} Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.552890 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09"} Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.554512 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.556678 4796 scope.go:117] "RemoveContainer" containerID="f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1" Nov 25 14:24:55 crc kubenswrapper[4796]: E1125 14:24:55.556814 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.557634 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" event={"ID":"93354b1f-76e7-4d82-999f-8093919ba0f7","Type":"ContainerStarted","Data":"bf9accc7d78a190b230bafe701775e5968081f46f4bd1bbce930191692b7db35"} Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.558668 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ch8mf" 
event={"ID":"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b","Type":"ContainerStarted","Data":"66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1"} Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.558689 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ch8mf" event={"ID":"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b","Type":"ContainerStarted","Data":"15d591461d50cf25ba18ca5d7495bdb78e528fde3d389856c339257d8d419673"} Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.564544 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.576002 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.588067 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.601957 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni
/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.612107 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.621197 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.636637 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.647933 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.658839 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.663240 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.672497 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.696239 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.739932 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.775330 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:55 crc kubenswrapper[4796]: I1125 14:24:55.816033 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.565178 4796 generic.go:334] "Generic (PLEG): container finished" podID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerID="d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb" exitCode=0 Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.565275 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerDied","Data":"d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb"} Nov 25 14:24:56 
crc kubenswrapper[4796]: I1125 14:24:56.565364 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerStarted","Data":"7a902735743e7e9e812461d34e94610342c5c2ec20371f5a4e9316517b3f1e93"} Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.567791 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87"} Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.571436 4796 generic.go:334] "Generic (PLEG): container finished" podID="93354b1f-76e7-4d82-999f-8093919ba0f7" containerID="bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a" exitCode=0 Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.571536 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" event={"ID":"93354b1f-76e7-4d82-999f-8093919ba0f7","Type":"ContainerDied","Data":"bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a"} Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.579520 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerStarted","Data":"56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233"} Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.590712 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.620716 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.638420 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.659189 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.674285 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.690923 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.706243 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.722400 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.735543 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.754343 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.774526 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.789193 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.807876 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.820841 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.831059 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.847162 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.863642 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:56Z is after 2025-08-24T17:21:41Z" Nov 25 
14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.877854 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.893478 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.906226 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d094
2e0624997f6214aa104a793f603be378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.921617 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.934946 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.952532 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.972074 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.991027 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:24:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.991176 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:24:56 crc kubenswrapper[4796]: E1125 14:24:56.991384 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:25:00.991365447 +0000 UTC m=+29.334474871 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:24:56 crc kubenswrapper[4796]: I1125 14:24:56.991429 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:24:56 crc kubenswrapper[4796]: E1125 14:24:56.991580 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered 
Nov 25 14:24:56 crc kubenswrapper[4796]: E1125 14:24:56.991654 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:25:00.991619765 +0000 UTC m=+29.334729199 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.006466 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.092814 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.092854 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" 
(UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.092884 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:24:57 crc kubenswrapper[4796]: E1125 14:24:57.092990 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:24:57 crc kubenswrapper[4796]: E1125 14:24:57.093003 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:24:57 crc kubenswrapper[4796]: E1125 14:24:57.093014 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:24:57 crc kubenswrapper[4796]: E1125 14:24:57.093053 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 14:25:01.093040822 +0000 UTC m=+29.436150246 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:24:57 crc kubenswrapper[4796]: E1125 14:24:57.093097 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:24:57 crc kubenswrapper[4796]: E1125 14:24:57.093116 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:25:01.093111105 +0000 UTC m=+29.436220529 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:24:57 crc kubenswrapper[4796]: E1125 14:24:57.093151 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:24:57 crc kubenswrapper[4796]: E1125 14:24:57.093160 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:24:57 crc kubenswrapper[4796]: E1125 14:24:57.093166 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:24:57 crc kubenswrapper[4796]: E1125 14:24:57.093185 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 14:25:01.093178757 +0000 UTC m=+29.436288181 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.233390 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-xphp5"] Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.233738 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xphp5" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.235324 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.235711 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.236405 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.236561 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.246818 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.256005 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.274087 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.285164 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 
14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.294206 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cnlh\" (UniqueName: \"kubernetes.io/projected/0df57e51-3d48-434e-95e6-3d001fbf2871-kube-api-access-8cnlh\") pod \"node-ca-xphp5\" (UID: \"0df57e51-3d48-434e-95e6-3d001fbf2871\") " pod="openshift-image-registry/node-ca-xphp5" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.294263 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0df57e51-3d48-434e-95e6-3d001fbf2871-host\") pod \"node-ca-xphp5\" (UID: \"0df57e51-3d48-434e-95e6-3d001fbf2871\") " pod="openshift-image-registry/node-ca-xphp5" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.294284 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0df57e51-3d48-434e-95e6-3d001fbf2871-serviceca\") pod \"node-ca-xphp5\" (UID: \"0df57e51-3d48-434e-95e6-3d001fbf2871\") " pod="openshift-image-registry/node-ca-xphp5" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.295096 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.309603 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.336135 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d094
2e0624997f6214aa104a793f603be378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.352233 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.364383 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.377466 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.393607 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.394712 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cnlh\" (UniqueName: \"kubernetes.io/projected/0df57e51-3d48-434e-95e6-3d001fbf2871-kube-api-access-8cnlh\") pod \"node-ca-xphp5\" (UID: \"0df57e51-3d48-434e-95e6-3d001fbf2871\") " pod="openshift-image-registry/node-ca-xphp5" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.394764 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0df57e51-3d48-434e-95e6-3d001fbf2871-host\") pod \"node-ca-xphp5\" (UID: \"0df57e51-3d48-434e-95e6-3d001fbf2871\") " pod="openshift-image-registry/node-ca-xphp5" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.394783 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0df57e51-3d48-434e-95e6-3d001fbf2871-serviceca\") pod \"node-ca-xphp5\" (UID: \"0df57e51-3d48-434e-95e6-3d001fbf2871\") 
" pod="openshift-image-registry/node-ca-xphp5" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.394869 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0df57e51-3d48-434e-95e6-3d001fbf2871-host\") pod \"node-ca-xphp5\" (UID: \"0df57e51-3d48-434e-95e6-3d001fbf2871\") " pod="openshift-image-registry/node-ca-xphp5" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.395588 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0df57e51-3d48-434e-95e6-3d001fbf2871-serviceca\") pod \"node-ca-xphp5\" (UID: \"0df57e51-3d48-434e-95e6-3d001fbf2871\") " pod="openshift-image-registry/node-ca-xphp5" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.408517 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:24:57 crc kubenswrapper[4796]: E1125 14:24:57.408667 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.409039 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:24:57 crc kubenswrapper[4796]: E1125 14:24:57.409104 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.409153 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:24:57 crc kubenswrapper[4796]: E1125 14:24:57.409203 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.429855 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cnlh\" (UniqueName: \"kubernetes.io/projected/0df57e51-3d48-434e-95e6-3d001fbf2871-kube-api-access-8cnlh\") pod \"node-ca-xphp5\" (UID: \"0df57e51-3d48-434e-95e6-3d001fbf2871\") " pod="openshift-image-registry/node-ca-xphp5" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.436330 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.478298 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.515842 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.545020 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-xphp5" Nov 25 14:24:57 crc kubenswrapper[4796]: W1125 14:24:57.569238 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0df57e51_3d48_434e_95e6_3d001fbf2871.slice/crio-3e5bae667f57e084c9985ae9335a1d7eebf76904f00c15a73b3adbc0d5196403 WatchSource:0}: Error finding container 3e5bae667f57e084c9985ae9335a1d7eebf76904f00c15a73b3adbc0d5196403: Status 404 returned error can't find the container with id 3e5bae667f57e084c9985ae9335a1d7eebf76904f00c15a73b3adbc0d5196403 Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.585706 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovn-acl-logging/0.log" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.586258 4796 generic.go:334] "Generic (PLEG): container finished" podID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerID="59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446" exitCode=1 Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.586313 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerStarted","Data":"0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a"} Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.586339 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerStarted","Data":"b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910"} Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.586349 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" 
event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerStarted","Data":"213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358"} Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.586357 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerDied","Data":"59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446"} Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.586367 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerStarted","Data":"babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542"} Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.587028 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xphp5" event={"ID":"0df57e51-3d48-434e-95e6-3d001fbf2871","Type":"ContainerStarted","Data":"3e5bae667f57e084c9985ae9335a1d7eebf76904f00c15a73b3adbc0d5196403"} Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.588201 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" event={"ID":"93354b1f-76e7-4d82-999f-8093919ba0f7","Type":"ContainerStarted","Data":"a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8"} Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.601884 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.618988 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.637002 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.678944 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d094
2e0624997f6214aa104a793f603be378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.716519 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.758246 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.795442 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.837637 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.879673 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.921997 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.955851 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:57 crc kubenswrapper[4796]: I1125 14:24:57.998797 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.026557 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.027985 4796 scope.go:117] "RemoveContainer" containerID="f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1" Nov 25 14:24:58 crc kubenswrapper[4796]: E1125 14:24:58.028365 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.036424 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.085025 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.598009 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovn-acl-logging/0.log" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.599256 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerStarted","Data":"f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b"} Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.602493 4796 generic.go:334] "Generic (PLEG): container finished" podID="93354b1f-76e7-4d82-999f-8093919ba0f7" containerID="a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8" exitCode=0 Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.602617 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" event={"ID":"93354b1f-76e7-4d82-999f-8093919ba0f7","Type":"ContainerDied","Data":"a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8"} Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.604845 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xphp5" 
event={"ID":"0df57e51-3d48-434e-95e6-3d001fbf2871","Type":"ContainerStarted","Data":"8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5"} Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.614725 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mo
untPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.638284 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.649202 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.662481 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.681156 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d094
2e0624997f6214aa104a793f603be378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.696005 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.714620 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.733654 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.751811 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.767383 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.778823 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:24:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.798498 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.813932 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.833319 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.845615 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.856811 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.867388 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d094
2e0624997f6214aa104a793f603be378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.879951 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.891833 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.905218 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.916472 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.955334 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:58 crc kubenswrapper[4796]: I1125 14:24:58.995851 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:24:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.039253 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.075480 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.115934 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.155647 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.207423 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.303547 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.309936 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.309995 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.310016 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.310187 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.318333 4796 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.318622 4796 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.320030 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:59 crc 
kubenswrapper[4796]: I1125 14:24:59.320130 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.320152 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.320222 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.320242 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:24:59Z","lastTransitionTime":"2025-11-25T14:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:24:59 crc kubenswrapper[4796]: E1125 14:24:59.349549 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.354797 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.354847 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.354865 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.354887 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.354903 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:24:59Z","lastTransitionTime":"2025-11-25T14:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:24:59 crc kubenswrapper[4796]: E1125 14:24:59.373422 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.377721 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.377759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.377769 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.377784 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.377794 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:24:59Z","lastTransitionTime":"2025-11-25T14:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:24:59 crc kubenswrapper[4796]: E1125 14:24:59.394030 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.398437 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.398504 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.398532 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.398568 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.398630 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:24:59Z","lastTransitionTime":"2025-11-25T14:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.408883 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.408939 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.408884 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:24:59 crc kubenswrapper[4796]: E1125 14:24:59.409074 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:24:59 crc kubenswrapper[4796]: E1125 14:24:59.409188 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:24:59 crc kubenswrapper[4796]: E1125 14:24:59.409277 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:24:59 crc kubenswrapper[4796]: E1125 14:24:59.417070 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.422171 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.422225 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.422242 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.422261 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.422276 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:24:59Z","lastTransitionTime":"2025-11-25T14:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:24:59 crc kubenswrapper[4796]: E1125 14:24:59.439454 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:59 crc kubenswrapper[4796]: E1125 14:24:59.439850 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.441761 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.441787 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.441799 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.441814 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.441826 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:24:59Z","lastTransitionTime":"2025-11-25T14:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.544004 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.544064 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.544082 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.544106 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.544123 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:24:59Z","lastTransitionTime":"2025-11-25T14:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.611171 4796 generic.go:334] "Generic (PLEG): container finished" podID="93354b1f-76e7-4d82-999f-8093919ba0f7" containerID="538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86" exitCode=0 Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.611266 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" event={"ID":"93354b1f-76e7-4d82-999f-8093919ba0f7","Type":"ContainerDied","Data":"538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86"} Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.641088 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.647430 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.647483 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.647499 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 
14:24:59.647521 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.647538 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:24:59Z","lastTransitionTime":"2025-11-25T14:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.659546 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.693542 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.706412 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603be378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.721842 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.735197 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.747113 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.750679 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.750715 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.750725 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.750739 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.750762 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:24:59Z","lastTransitionTime":"2025-11-25T14:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.762733 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.779106 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.791649 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.802070 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.812131 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:24:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.827199 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 
14:24:59.836896 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:24:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.852807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.852855 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.852869 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.852889 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.852906 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:24:59Z","lastTransitionTime":"2025-11-25T14:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.955450 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.955505 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.955520 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.955541 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:24:59 crc kubenswrapper[4796]: I1125 14:24:59.955556 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:24:59Z","lastTransitionTime":"2025-11-25T14:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.059100 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.059160 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.059180 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.059205 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.059223 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:00Z","lastTransitionTime":"2025-11-25T14:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.162520 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.162561 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.162602 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.162621 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.162632 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:00Z","lastTransitionTime":"2025-11-25T14:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.266765 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.267157 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.267179 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.267549 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.267842 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:00Z","lastTransitionTime":"2025-11-25T14:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.371467 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.371517 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.371535 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.371561 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.371605 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:00Z","lastTransitionTime":"2025-11-25T14:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.474986 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.475052 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.475076 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.475110 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.475132 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:00Z","lastTransitionTime":"2025-11-25T14:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.578374 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.578453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.578471 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.578498 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.578515 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:00Z","lastTransitionTime":"2025-11-25T14:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.620970 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovn-acl-logging/0.log" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.622061 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerStarted","Data":"f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68"} Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.626622 4796 generic.go:334] "Generic (PLEG): container finished" podID="93354b1f-76e7-4d82-999f-8093919ba0f7" containerID="0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131" exitCode=0 Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.626665 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" event={"ID":"93354b1f-76e7-4d82-999f-8093919ba0f7","Type":"ContainerDied","Data":"0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131"} Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.649742 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.668082 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.682207 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.682266 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.682283 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.682314 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.682335 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:00Z","lastTransitionTime":"2025-11-25T14:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.697438 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.723845 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:00Z is after 2025-08-24T17:21:41Z" Nov 25 
14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.741755 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.760877 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.776903 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d094
2e0624997f6214aa104a793f603be378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.785979 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.786026 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.786045 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 
25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.786069 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.786086 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:00Z","lastTransitionTime":"2025-11-25T14:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.798928 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.822812 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.838895 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.860761 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.880974 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:25:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.888606 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.888675 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.888699 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.888730 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.888751 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:00Z","lastTransitionTime":"2025-11-25T14:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.928533 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.953418 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.995125 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.995172 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.995185 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.995202 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:00 crc kubenswrapper[4796]: I1125 14:25:00.995213 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:00Z","lastTransitionTime":"2025-11-25T14:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.032108 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:25:01 crc kubenswrapper[4796]: E1125 14:25:01.032529 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:25:09.032497161 +0000 UTC m=+37.375606615 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.032628 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:01 crc kubenswrapper[4796]: E1125 14:25:01.032840 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:25:01 crc kubenswrapper[4796]: E1125 14:25:01.032977 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:25:09.032941924 +0000 UTC m=+37.376051388 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.098872 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.098949 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.098977 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.099011 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.099036 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:01Z","lastTransitionTime":"2025-11-25T14:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.133610 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.133681 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.133709 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:01 crc kubenswrapper[4796]: E1125 14:25:01.133883 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:25:01 crc kubenswrapper[4796]: E1125 14:25:01.133911 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:25:01 crc kubenswrapper[4796]: E1125 14:25:01.133929 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:25:01 crc kubenswrapper[4796]: E1125 14:25:01.133923 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:25:01 crc kubenswrapper[4796]: E1125 14:25:01.133977 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:25:01 crc kubenswrapper[4796]: E1125 14:25:01.133999 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 14:25:09.133981979 +0000 UTC m=+37.477091413 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:25:01 crc kubenswrapper[4796]: E1125 14:25:01.133998 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:25:01 crc kubenswrapper[4796]: E1125 14:25:01.134005 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:25:01 crc kubenswrapper[4796]: E1125 14:25:01.134126 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:25:09.134094582 +0000 UTC m=+37.477204016 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:25:01 crc kubenswrapper[4796]: E1125 14:25:01.134252 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 14:25:09.134239407 +0000 UTC m=+37.477348841 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.201895 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.201953 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.201970 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.201996 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.202013 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:01Z","lastTransitionTime":"2025-11-25T14:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.305833 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.305900 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.305920 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.305947 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.305964 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:01Z","lastTransitionTime":"2025-11-25T14:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.408341 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:01 crc kubenswrapper[4796]: E1125 14:25:01.408522 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.408916 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:01 crc kubenswrapper[4796]: E1125 14:25:01.409083 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.408915 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:01 crc kubenswrapper[4796]: E1125 14:25:01.409241 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.409632 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.409683 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.409700 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.409725 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.409744 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:01Z","lastTransitionTime":"2025-11-25T14:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.512761 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.512819 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.512836 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.512864 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.512880 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:01Z","lastTransitionTime":"2025-11-25T14:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.616525 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.616607 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.616625 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.616645 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.616663 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:01Z","lastTransitionTime":"2025-11-25T14:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.635134 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" event={"ID":"93354b1f-76e7-4d82-999f-8093919ba0f7","Type":"ContainerStarted","Data":"f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0"} Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.658559 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00
e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.681257 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.703305 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.719918 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.719982 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.720003 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.720030 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.720061 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:01Z","lastTransitionTime":"2025-11-25T14:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.725276 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.742325 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.759157 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d094
2e0624997f6214aa104a793f603be378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.782314 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.802274 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6
fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.818213 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520e
d63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.822858 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.822902 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.822914 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.822936 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.822949 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:01Z","lastTransitionTime":"2025-11-25T14:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.839683 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.860542 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:25:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.880308 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.915827 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.925811 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.925901 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.925930 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.925964 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.925987 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:01Z","lastTransitionTime":"2025-11-25T14:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:01 crc kubenswrapper[4796]: I1125 14:25:01.935665 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.029681 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.029754 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.029774 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.029801 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.029821 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:02Z","lastTransitionTime":"2025-11-25T14:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.132533 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.132607 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.132624 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.132647 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.132661 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:02Z","lastTransitionTime":"2025-11-25T14:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.235130 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.235762 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.235786 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.235810 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.235827 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:02Z","lastTransitionTime":"2025-11-25T14:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.338981 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.339058 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.339080 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.339114 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.339139 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:02Z","lastTransitionTime":"2025-11-25T14:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.429646 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.441794 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.441832 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.441843 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.441860 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.441872 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:02Z","lastTransitionTime":"2025-11-25T14:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.454027 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:
24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.471084 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603be378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.500122 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.520245 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.540977 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.546174 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.546208 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.546221 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.546238 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.546252 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:02Z","lastTransitionTime":"2025-11-25T14:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.560450 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.577482 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.589159 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.604880 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6
fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.620830 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520e
d63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.635386 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.643097 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovn-acl-logging/0.log" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.643968 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerStarted","Data":"b044671af0b40722e3c8b08674597b4f20edc3b99e39013ec71656b16f222bc7"} Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.644770 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.644817 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.645204 4796 scope.go:117] "RemoveContainer" 
containerID="59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.649300 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.649336 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.649353 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.649373 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.649388 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:02Z","lastTransitionTime":"2025-11-25T14:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.651977 4796 generic.go:334] "Generic (PLEG): container finished" podID="93354b1f-76e7-4d82-999f-8093919ba0f7" containerID="f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0" exitCode=0 Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.652032 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" event={"ID":"93354b1f-76e7-4d82-999f-8093919ba0f7","Type":"ContainerDied","Data":"f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0"} Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.653442 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf
2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.674297 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.678752 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.681406 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.689233 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c
0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.703775 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.716429 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.729349 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.741881 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.751855 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.751900 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.751912 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.751929 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.751941 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:02Z","lastTransitionTime":"2025-11-25T14:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.757430 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:
24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.769291 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603be378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.780940 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.793298 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.808259 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.820184 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.832547 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.845456 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.855468 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.855501 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.855510 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.855523 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.855532 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:02Z","lastTransitionTime":"2025-11-25T14:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.865386 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-acl-logging nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-acl-logging nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries 
+= 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044671af0b40722e3c8b08674597b4f20edc3b99e39013ec71656b16f222bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\
\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\"
:[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.878236 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.887597 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.905411 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-acl-logging ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-acl-logging 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 
1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044671af0b40722e3c8b08674597b4f20edc3b99e39013ec71656b16f222bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"
mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.920831 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.934552 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.944273 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.955635 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.957416 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:02 crc 
kubenswrapper[4796]: I1125 14:25:02.957473 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.957482 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.957495 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.957503 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:02Z","lastTransitionTime":"2025-11-25T14:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.969100 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603b
e378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.982832 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:02 crc kubenswrapper[4796]: I1125 14:25:02.997626 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.008778 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:03Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.019669 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:03Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.030454 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:25:03Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.044678 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:03Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.059427 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.059480 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.059496 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.059514 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.059527 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:03Z","lastTransitionTime":"2025-11-25T14:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.161989 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.162035 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.162046 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.162062 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.162073 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:03Z","lastTransitionTime":"2025-11-25T14:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.265359 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.265600 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.265644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.265675 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.265701 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:03Z","lastTransitionTime":"2025-11-25T14:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.368802 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.369263 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.369282 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.369310 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.369329 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:03Z","lastTransitionTime":"2025-11-25T14:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.409344 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.409400 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.409400 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:03 crc kubenswrapper[4796]: E1125 14:25:03.409533 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:03 crc kubenswrapper[4796]: E1125 14:25:03.409780 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:03 crc kubenswrapper[4796]: E1125 14:25:03.409958 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.472909 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.473092 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.473144 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.473178 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.473199 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:03Z","lastTransitionTime":"2025-11-25T14:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.575995 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.576035 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.576044 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.576061 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.576070 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:03Z","lastTransitionTime":"2025-11-25T14:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.660885 4796 generic.go:334] "Generic (PLEG): container finished" podID="93354b1f-76e7-4d82-999f-8093919ba0f7" containerID="b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502" exitCode=0 Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.660956 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" event={"ID":"93354b1f-76e7-4d82-999f-8093919ba0f7","Type":"ContainerDied","Data":"b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502"} Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.667736 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovn-acl-logging/0.log" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.669184 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerStarted","Data":"7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb"} Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.669356 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.679534 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.679659 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.679678 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.679727 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 
14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.679744 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:03Z","lastTransitionTime":"2025-11-25T14:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.680638 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:03Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.699464 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:25:03Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.724892 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:03Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.740927 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:03Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.765372 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:03Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.782953 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.783026 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.783039 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.783056 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.783071 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:03Z","lastTransitionTime":"2025-11-25T14:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.783463 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:03Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.818146 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-acl-logging 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-acl-logging ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ 
sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\
\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044671af0b40722e3c8b08674597b4f20edc3b99e39013ec71656b16f222bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"
mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:03Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.834227 4796 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\
\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:03Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.846797 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:03Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.867701 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:03Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.881085 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:03Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.884955 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.884993 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.885002 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.885014 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.885025 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:03Z","lastTransitionTime":"2025-11-25T14:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.894230 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:03Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.906865 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:03Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.919511 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d094
2e0624997f6214aa104a793f603be378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:03Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.932981 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:03Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.946385 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:03Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.960834 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:03Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.975671 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:03Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.987356 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.987386 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.987397 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.987415 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.987429 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:03Z","lastTransitionTime":"2025-11-25T14:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:03 crc kubenswrapper[4796]: I1125 14:25:03.991087 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:
24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:03Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.007314 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603be378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T14:25:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.022666 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.048186 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.064101 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.079413 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.090633 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.090696 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.090714 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.090739 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.090755 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:04Z","lastTransitionTime":"2025-11-25T14:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.093415 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.107052 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.134326 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ 
MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/
var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044671af0b40722e3c8b08674597b4f20edc3b99e39013ec71656b16f222bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\
\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 
14:25:04.146267 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.193487 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.193525 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.193536 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.193552 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.193563 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:04Z","lastTransitionTime":"2025-11-25T14:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.296089 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.296144 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.296164 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.296186 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.296203 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:04Z","lastTransitionTime":"2025-11-25T14:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.398863 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.398932 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.398949 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.398971 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.398988 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:04Z","lastTransitionTime":"2025-11-25T14:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.502074 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.502127 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.502144 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.502167 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.502185 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:04Z","lastTransitionTime":"2025-11-25T14:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.604986 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.605048 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.605065 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.605091 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.605109 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:04Z","lastTransitionTime":"2025-11-25T14:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.681209 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" event={"ID":"93354b1f-76e7-4d82-999f-8093919ba0f7","Type":"ContainerStarted","Data":"8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3"} Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.681264 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.706061 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044671af0b40722e3c8b08674597b4f20edc3b99e39013ec71656b16f222bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPa
th\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.708594 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.708642 4796 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.708668 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.708699 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.708729 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:04Z","lastTransitionTime":"2025-11-25T14:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.724052 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.742208 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.760231 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.784991 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.804138 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.811453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.811509 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.811528 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.811553 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.811604 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:04Z","lastTransitionTime":"2025-11-25T14:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.822422 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:
24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.838849 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603be378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T14:25:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.856971 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.879442 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\
\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\
\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\
\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.893798 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.913607 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.913636 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.913647 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.913665 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.913677 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:04Z","lastTransitionTime":"2025-11-25T14:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.913553 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.929259 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:25:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:04 crc kubenswrapper[4796]: I1125 14:25:04.946066 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.016722 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.016793 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.016807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.016826 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.016842 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:05Z","lastTransitionTime":"2025-11-25T14:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.119910 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.119975 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.119992 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.120019 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.120038 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:05Z","lastTransitionTime":"2025-11-25T14:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.223553 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.223668 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.223688 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.223717 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.223739 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:05Z","lastTransitionTime":"2025-11-25T14:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.326566 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.326621 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.326630 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.326644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.326654 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:05Z","lastTransitionTime":"2025-11-25T14:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.408687 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.408717 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:05 crc kubenswrapper[4796]: E1125 14:25:05.408807 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:05 crc kubenswrapper[4796]: E1125 14:25:05.408905 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.408985 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:05 crc kubenswrapper[4796]: E1125 14:25:05.409043 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.429063 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.429091 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.429103 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.429117 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.429129 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:05Z","lastTransitionTime":"2025-11-25T14:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.531656 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.531705 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.531741 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.531765 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.531783 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:05Z","lastTransitionTime":"2025-11-25T14:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.635105 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.635158 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.635174 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.635197 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.635211 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:05Z","lastTransitionTime":"2025-11-25T14:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.738718 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.738779 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.738796 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.738822 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.738842 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:05Z","lastTransitionTime":"2025-11-25T14:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.841806 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.841869 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.841885 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.841908 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.841925 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:05Z","lastTransitionTime":"2025-11-25T14:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.945096 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.945131 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.945143 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.945160 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:05 crc kubenswrapper[4796]: I1125 14:25:05.945175 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:05Z","lastTransitionTime":"2025-11-25T14:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.048778 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.048838 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.048854 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.048878 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.048895 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:06Z","lastTransitionTime":"2025-11-25T14:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.151697 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.151750 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.151762 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.151780 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.151792 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:06Z","lastTransitionTime":"2025-11-25T14:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.254975 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.255042 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.255060 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.255085 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.255104 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:06Z","lastTransitionTime":"2025-11-25T14:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.357482 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.357540 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.357562 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.357617 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.357635 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:06Z","lastTransitionTime":"2025-11-25T14:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.460390 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.460437 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.460453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.460476 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.460491 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:06Z","lastTransitionTime":"2025-11-25T14:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.564261 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.564321 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.564338 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.564362 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.564381 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:06Z","lastTransitionTime":"2025-11-25T14:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.666553 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.666680 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.666705 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.666735 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.666756 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:06Z","lastTransitionTime":"2025-11-25T14:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.692318 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovnkube-controller/0.log" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.695436 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovn-acl-logging/0.log" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.696770 4796 generic.go:334] "Generic (PLEG): container finished" podID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerID="b044671af0b40722e3c8b08674597b4f20edc3b99e39013ec71656b16f222bc7" exitCode=1 Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.696829 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerDied","Data":"b044671af0b40722e3c8b08674597b4f20edc3b99e39013ec71656b16f222bc7"} Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.698083 4796 scope.go:117] "RemoveContainer" containerID="b044671af0b40722e3c8b08674597b4f20edc3b99e39013ec71656b16f222bc7" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.720501 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:06Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.739955 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:06Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.763288 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:06Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.770171 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.770241 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.770265 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.770294 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.770315 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:06Z","lastTransitionTime":"2025-11-25T14:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.785243 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:
24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:06Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.805185 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603be378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T14:25:06Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.838664 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:06Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.863059 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:06Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.873360 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.873413 4796 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.873436 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.873465 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.873483 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:06Z","lastTransitionTime":"2025-11-25T14:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.881399 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:06Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.902780 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:06Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.921947 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:25:06Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.945886 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:06Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.966674 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:06Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.976229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.976319 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.976348 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.976377 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.976400 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:06Z","lastTransitionTime":"2025-11-25T14:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:06 crc kubenswrapper[4796]: I1125 14:25:06.985649 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:06Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.015901 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ 
MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/
var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b044671af0b40722e3c8b08674597b4f20edc3b99e39013ec71656b16f222bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b044671af0b40722e3c8b08674597b4f20edc3b99e39013ec71656b16f222bc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:06Z\\\",\\\"message\\\":\\\"4:25:06.026498 6055 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:25:06.026511 6055 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 14:25:06.026521 6055 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:25:06.026564 6055 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 14:25:06.026660 6055 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 14:25:06.026726 6055 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 14:25:06.026739 6055 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1125 14:25:06.026760 6055 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:25:06.026768 6055 handler.go:190] Sending *v1.Node event handler 7 for 
removal\\\\nI1125 14:25:06.026791 6055 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:25:06.026816 6055 factory.go:656] Stopping watch factory\\\\nI1125 14:25:06.026820 6055 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:25:06.026832 6055 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:25:06.026834 6055 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 14:25:06.026878 6055 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 14:25:06.026874 6055 handler.go:208] Removed *v1.Node even\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\
":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126
a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.079223 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.079273 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.079292 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.079316 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.079335 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:07Z","lastTransitionTime":"2025-11-25T14:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.182472 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.182547 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.182568 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.182635 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.182676 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:07Z","lastTransitionTime":"2025-11-25T14:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.286871 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.286911 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.286928 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.286950 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.286966 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:07Z","lastTransitionTime":"2025-11-25T14:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.390255 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.390288 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.390299 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.390316 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.390326 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:07Z","lastTransitionTime":"2025-11-25T14:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.408366 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.408421 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.408420 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:07 crc kubenswrapper[4796]: E1125 14:25:07.408522 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:07 crc kubenswrapper[4796]: E1125 14:25:07.408711 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:07 crc kubenswrapper[4796]: E1125 14:25:07.408860 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.493723 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.493778 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.493796 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.493821 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.493837 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:07Z","lastTransitionTime":"2025-11-25T14:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.601243 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.601306 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.601327 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.601365 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.601385 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:07Z","lastTransitionTime":"2025-11-25T14:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.705676 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.705713 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.705725 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.705740 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.705751 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:07Z","lastTransitionTime":"2025-11-25T14:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.708297 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovnkube-controller/0.log" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.712966 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovn-acl-logging/0.log" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.713943 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerStarted","Data":"1baa1ac75345aee01481d2e32c30cc7b0484b9e7b4901487b4c12d6dccf6c689"} Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.714079 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.739842 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.760863 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.790910 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa1ac75345aee01481d2e32c30cc7b0484b9e7b4901487b4c12d6dccf6c689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b044671af0b40722e3c8b08674597b4f20edc3b99e39013ec71656b16f222bc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:06Z\\\",\\\"message\\\":\\\"4:25:06.026498 6055 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:25:06.026511 6055 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 14:25:06.026521 6055 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:25:06.026564 6055 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 14:25:06.026660 6055 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 14:25:06.026726 6055 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 14:25:06.026739 6055 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1125 14:25:06.026760 6055 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:25:06.026768 6055 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 14:25:06.026791 6055 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:25:06.026816 6055 factory.go:656] Stopping watch factory\\\\nI1125 14:25:06.026820 6055 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:25:06.026832 6055 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:25:06.026834 6055 handler.go:208] Removed 
*v1.Namespace event handler 1\\\\nI1125 14:25:06.026878 6055 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 14:25:06.026874 6055 handler.go:208] Removed *v1.Node even\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath
\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.805171 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.807885 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.807926 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.807939 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.807955 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.807967 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:07Z","lastTransitionTime":"2025-11-25T14:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.819929 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.832025 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.842505 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d094
2e0624997f6214aa104a793f603be378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.855195 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.870433 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.884564 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.896735 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.910395 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:25:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.910529 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.910591 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.910604 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.910624 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.910637 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:07Z","lastTransitionTime":"2025-11-25T14:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.931409 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.943199 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.955654 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g"] Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.956308 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.958900 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.959710 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.969628 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:07 crc kubenswrapper[4796]: I1125 14:25:07.980345 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.009561 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d223a119-11a5-4802-9e8e-645fdb31ea88-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lsz8g\" (UID: \"d223a119-11a5-4802-9e8e-645fdb31ea88\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.009717 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wqfc\" (UniqueName: \"kubernetes.io/projected/d223a119-11a5-4802-9e8e-645fdb31ea88-kube-api-access-2wqfc\") pod \"ovnkube-control-plane-749d76644c-lsz8g\" (UID: \"d223a119-11a5-4802-9e8e-645fdb31ea88\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.009829 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d223a119-11a5-4802-9e8e-645fdb31ea88-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lsz8g\" (UID: \"d223a119-11a5-4802-9e8e-645fdb31ea88\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.009885 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d223a119-11a5-4802-9e8e-645fdb31ea88-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lsz8g\" (UID: \"d223a119-11a5-4802-9e8e-645fdb31ea88\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.009739 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa1ac75345aee01481d2e32c30cc7b0484b9e7b4901487b4c12d6dccf6c689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b044671af0b40722e3c8b08674597b4f20edc3b99e39013ec71656b16f222bc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:06Z\\\",\\\"message\\\":\\\"4:25:06.026498 6055 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:25:06.026511 6055 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 14:25:06.026521 6055 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:25:06.026564 6055 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 14:25:06.026660 6055 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 14:25:06.026726 6055 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 14:25:06.026739 6055 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1125 14:25:06.026760 6055 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:25:06.026768 6055 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 14:25:06.026791 6055 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:25:06.026816 6055 factory.go:656] Stopping watch factory\\\\nI1125 14:25:06.026820 6055 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:25:06.026832 6055 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:25:06.026834 6055 handler.go:208] Removed 
*v1.Namespace event handler 1\\\\nI1125 14:25:06.026878 6055 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 14:25:06.026874 6055 handler.go:208] Removed *v1.Node even\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath
\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.013525 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.013638 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.013666 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.013702 4796 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.013728 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:08Z","lastTransitionTime":"2025-11-25T14:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.035970 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.048964 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.063102 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.083152 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.097769 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.110963 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d223a119-11a5-4802-9e8e-645fdb31ea88-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lsz8g\" (UID: \"d223a119-11a5-4802-9e8e-645fdb31ea88\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.111053 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d223a119-11a5-4802-9e8e-645fdb31ea88-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lsz8g\" (UID: \"d223a119-11a5-4802-9e8e-645fdb31ea88\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 
14:25:08.111160 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d223a119-11a5-4802-9e8e-645fdb31ea88-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lsz8g\" (UID: \"d223a119-11a5-4802-9e8e-645fdb31ea88\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.111251 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wqfc\" (UniqueName: \"kubernetes.io/projected/d223a119-11a5-4802-9e8e-645fdb31ea88-kube-api-access-2wqfc\") pod \"ovnkube-control-plane-749d76644c-lsz8g\" (UID: \"d223a119-11a5-4802-9e8e-645fdb31ea88\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.112286 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d223a119-11a5-4802-9e8e-645fdb31ea88-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lsz8g\" (UID: \"d223a119-11a5-4802-9e8e-645fdb31ea88\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.112507 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.112740 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/d223a119-11a5-4802-9e8e-645fdb31ea88-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lsz8g\" (UID: \"d223a119-11a5-4802-9e8e-645fdb31ea88\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.117539 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.117611 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.117626 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.117644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.117658 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:08Z","lastTransitionTime":"2025-11-25T14:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.118320 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d223a119-11a5-4802-9e8e-645fdb31ea88-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lsz8g\" (UID: \"d223a119-11a5-4802-9e8e-645fdb31ea88\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.130646 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c427
45f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603be378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.132695 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wqfc\" (UniqueName: \"kubernetes.io/projected/d223a119-11a5-4802-9e8e-645fdb31ea88-kube-api-access-2wqfc\") pod \"ovnkube-control-plane-749d76644c-lsz8g\" (UID: \"d223a119-11a5-4802-9e8e-645fdb31ea88\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.146456 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.161410 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.174962 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.184544 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.196475 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d223a119-11a5-4802-9e8e-645fdb31ea88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsz8g\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.221124 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.221173 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.221186 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.221204 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.221219 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:08Z","lastTransitionTime":"2025-11-25T14:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.275307 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" Nov 25 14:25:08 crc kubenswrapper[4796]: W1125 14:25:08.296593 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd223a119_11a5_4802_9e8e_645fdb31ea88.slice/crio-c49207f27558e508d394b5752faad0edf8e33809569c4039d2d5a6d728cc93bd WatchSource:0}: Error finding container c49207f27558e508d394b5752faad0edf8e33809569c4039d2d5a6d728cc93bd: Status 404 returned error can't find the container with id c49207f27558e508d394b5752faad0edf8e33809569c4039d2d5a6d728cc93bd Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.325964 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.326028 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.326045 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.326069 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.326087 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:08Z","lastTransitionTime":"2025-11-25T14:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.429357 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.429421 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.429439 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.429460 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.429479 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:08Z","lastTransitionTime":"2025-11-25T14:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.532914 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.532966 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.532983 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.533009 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.533025 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:08Z","lastTransitionTime":"2025-11-25T14:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.636007 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.636050 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.636066 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.636089 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.636106 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:08Z","lastTransitionTime":"2025-11-25T14:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.723486 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" event={"ID":"d223a119-11a5-4802-9e8e-645fdb31ea88","Type":"ContainerStarted","Data":"021d2f3be55ec89048bafcd7e73eccf6bd883ad03e92be98479670ef2abde11b"} Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.723620 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" event={"ID":"d223a119-11a5-4802-9e8e-645fdb31ea88","Type":"ContainerStarted","Data":"c49207f27558e508d394b5752faad0edf8e33809569c4039d2d5a6d728cc93bd"} Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.727098 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovnkube-controller/1.log" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.728828 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovnkube-controller/0.log" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.734663 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovn-acl-logging/0.log" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.737231 4796 generic.go:334] "Generic (PLEG): container finished" podID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerID="1baa1ac75345aee01481d2e32c30cc7b0484b9e7b4901487b4c12d6dccf6c689" exitCode=1 Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.737292 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerDied","Data":"1baa1ac75345aee01481d2e32c30cc7b0484b9e7b4901487b4c12d6dccf6c689"} Nov 25 
14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.737349 4796 scope.go:117] "RemoveContainer" containerID="b044671af0b40722e3c8b08674597b4f20edc3b99e39013ec71656b16f222bc7" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.738782 4796 scope.go:117] "RemoveContainer" containerID="1baa1ac75345aee01481d2e32c30cc7b0484b9e7b4901487b4c12d6dccf6c689" Nov 25 14:25:08 crc kubenswrapper[4796]: E1125 14:25:08.739046 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-22sz8_openshift-ovn-kubernetes(6eddc136-852e-4cf9-9f8a-e9ec94fc14d3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.741008 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.741050 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.741110 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.741135 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.741189 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:08Z","lastTransitionTime":"2025-11-25T14:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.760827 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.784436 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.813443 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.830854 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.845168 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.845223 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.845244 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.845264 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.845278 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:08Z","lastTransitionTime":"2025-11-25T14:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.851466 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d223a119-11a5-4802-9e8e-645fdb31ea88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsz8g\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.871218 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.891623 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.924670 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa1ac75345aee01481d2e32c30cc7b0484b9e7b4901487b4c12d6dccf6c689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b044671af0b40722e3c8b08674597b4f20edc3b99e39013ec71656b16f222bc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:06Z\\\",\\\"message\\\":\\\"4:25:06.026498 6055 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:25:06.026511 6055 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 14:25:06.026521 6055 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:25:06.026564 6055 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 14:25:06.026660 6055 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 14:25:06.026726 6055 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 14:25:06.026739 6055 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1125 14:25:06.026760 6055 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:25:06.026768 6055 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 14:25:06.026791 6055 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:25:06.026816 6055 factory.go:656] Stopping watch factory\\\\nI1125 14:25:06.026820 6055 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:25:06.026832 6055 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:25:06.026834 6055 handler.go:208] Removed 
*v1.Namespace event handler 1\\\\nI1125 14:25:06.026878 6055 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 14:25:06.026874 6055 handler.go:208] Removed *v1.Node even\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa1ac75345aee01481d2e32c30cc7b0484b9e7b4901487b4c12d6dccf6c689\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:08Z\\\",\\\"message\\\":\\\"twork controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z]\\\\nI1125 14:25:08.157195 6254 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-h6xfl after 0 failed attempt(s)\\\\nI1125 14:25:08.157204 6254 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-h6xfl\\\\nI1125 14:25:08.156865 6254 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-nz5r2\\\\nI1125 14:25:08.156947 6254 services_controller.go:443] Built service openshift-kube-controller-manager/kube-controller-manager LB cluster-wide configs for network=default: 
[]services.lbConfig{\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69
f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.942341 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.947899 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:08 crc 
kubenswrapper[4796]: I1125 14:25:08.948096 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.948186 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.948290 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.948374 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:08Z","lastTransitionTime":"2025-11-25T14:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.955070 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603b
e378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.969727 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.984074 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:08 crc kubenswrapper[4796]: I1125 14:25:08.996704 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.012069 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.025794 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.051171 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.051477 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.051592 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.051686 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.051836 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:09Z","lastTransitionTime":"2025-11-25T14:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.085720 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-n4f9r"] Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.086535 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:09 crc kubenswrapper[4796]: E1125 14:25:09.086715 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.099301 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.117837 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa1ac75345aee01481d2e32c30cc7b0484b9e7b4901487b4c12d6dccf6c689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b044671af0b40722e3c8b08674597b4f20edc3b99e39013ec71656b16f222bc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:06Z\\\",\\\"message\\\":\\\"4:25:06.026498 6055 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:25:06.026511 6055 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 14:25:06.026521 6055 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:25:06.026564 6055 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 14:25:06.026660 6055 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 14:25:06.026726 6055 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 14:25:06.026739 6055 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1125 14:25:06.026760 6055 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:25:06.026768 6055 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 14:25:06.026791 6055 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:25:06.026816 6055 factory.go:656] Stopping watch factory\\\\nI1125 14:25:06.026820 6055 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:25:06.026832 6055 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:25:06.026834 6055 handler.go:208] Removed 
*v1.Namespace event handler 1\\\\nI1125 14:25:06.026878 6055 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 14:25:06.026874 6055 handler.go:208] Removed *v1.Node even\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa1ac75345aee01481d2e32c30cc7b0484b9e7b4901487b4c12d6dccf6c689\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:08Z\\\",\\\"message\\\":\\\"twork controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z]\\\\nI1125 14:25:08.157195 6254 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-h6xfl after 0 failed attempt(s)\\\\nI1125 14:25:08.157204 6254 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-h6xfl\\\\nI1125 14:25:08.156865 6254 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-nz5r2\\\\nI1125 14:25:08.156947 6254 services_controller.go:443] Built service openshift-kube-controller-manager/kube-controller-manager LB cluster-wide configs for network=default: 
[]services.lbConfig{\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69
f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.125132 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:25:09 crc kubenswrapper[4796]: E1125 14:25:09.125362 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:25:25.125321502 +0000 UTC m=+53.468431096 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.125451 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:09 crc kubenswrapper[4796]: E1125 14:25:09.125660 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:25:09 crc kubenswrapper[4796]: E1125 14:25:09.125777 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:25:25.125747516 +0000 UTC m=+53.468856980 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.132886 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.149429 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.154184 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:09 crc 
kubenswrapper[4796]: I1125 14:25:09.154228 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.154241 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.154261 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.154276 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:09Z","lastTransitionTime":"2025-11-25T14:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.164315 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603b
e378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.183054 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.196671 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.209847 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.226356 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.226414 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.226460 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8gd7\" (UniqueName: \"kubernetes.io/projected/a07d588f-1940-4a4b-a4a9-94451e43ec8d-kube-api-access-h8gd7\") pod \"network-metrics-daemon-n4f9r\" (UID: \"a07d588f-1940-4a4b-a4a9-94451e43ec8d\") " pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.226492 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs\") pod \"network-metrics-daemon-n4f9r\" (UID: \"a07d588f-1940-4a4b-a4a9-94451e43ec8d\") " pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.226536 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:09 crc kubenswrapper[4796]: E1125 14:25:09.226784 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:25:09 crc kubenswrapper[4796]: E1125 14:25:09.226807 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:25:09 crc kubenswrapper[4796]: E1125 14:25:09.226881 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:25:09 crc kubenswrapper[4796]: E1125 14:25:09.226818 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:25:09 crc kubenswrapper[4796]: E1125 14:25:09.226906 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:25:09 crc kubenswrapper[4796]: E1125 14:25:09.226926 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 
14:25:09 crc kubenswrapper[4796]: E1125 14:25:09.226996 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 14:25:25.226977716 +0000 UTC m=+53.570087150 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:25:09 crc kubenswrapper[4796]: E1125 14:25:09.227051 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 14:25:25.227037658 +0000 UTC m=+53.570147092 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:25:09 crc kubenswrapper[4796]: E1125 14:25:09.227561 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:25:09 crc kubenswrapper[4796]: E1125 14:25:09.227798 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:25:25.227700959 +0000 UTC m=+53.570810423 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.229768 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"nam
e\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.244674 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n4f9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07d588f-1940-4a4b-a4a9-94451e43ec8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n4f9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc 
kubenswrapper[4796]: I1125 14:25:09.258090 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.258167 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.258234 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.258262 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.258283 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:09Z","lastTransitionTime":"2025-11-25T14:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.262482 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.280741 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.302048 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.316341 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.327183 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8gd7\" (UniqueName: \"kubernetes.io/projected/a07d588f-1940-4a4b-a4a9-94451e43ec8d-kube-api-access-h8gd7\") pod \"network-metrics-daemon-n4f9r\" (UID: \"a07d588f-1940-4a4b-a4a9-94451e43ec8d\") " pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.327262 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs\") pod 
\"network-metrics-daemon-n4f9r\" (UID: \"a07d588f-1940-4a4b-a4a9-94451e43ec8d\") " pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:09 crc kubenswrapper[4796]: E1125 14:25:09.327481 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:25:09 crc kubenswrapper[4796]: E1125 14:25:09.327569 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs podName:a07d588f-1940-4a4b-a4a9-94451e43ec8d nodeName:}" failed. No retries permitted until 2025-11-25 14:25:09.827541276 +0000 UTC m=+38.170650740 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs") pod "network-metrics-daemon-n4f9r" (UID: "a07d588f-1940-4a4b-a4a9-94451e43ec8d") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.332687 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d223a119-11a5-4802-9e8e-645fdb31ea88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\
\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsz8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.351151 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.353144 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8gd7\" (UniqueName: \"kubernetes.io/projected/a07d588f-1940-4a4b-a4a9-94451e43ec8d-kube-api-access-h8gd7\") pod \"network-metrics-daemon-n4f9r\" (UID: \"a07d588f-1940-4a4b-a4a9-94451e43ec8d\") " pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.361209 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.361484 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.361689 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.361844 4796 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.362028 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:09Z","lastTransitionTime":"2025-11-25T14:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.408609 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.408647 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.408672 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:09 crc kubenswrapper[4796]: E1125 14:25:09.409210 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:09 crc kubenswrapper[4796]: E1125 14:25:09.409490 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:09 crc kubenswrapper[4796]: E1125 14:25:09.409232 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.466307 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.466368 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.466398 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.466427 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.466445 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:09Z","lastTransitionTime":"2025-11-25T14:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.569815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.569877 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.569896 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.569921 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.569940 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:09Z","lastTransitionTime":"2025-11-25T14:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.674037 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.674136 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.674162 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.674195 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.674217 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:09Z","lastTransitionTime":"2025-11-25T14:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.744406 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovnkube-controller/1.log" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.748844 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovn-acl-logging/0.log" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.752348 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" event={"ID":"d223a119-11a5-4802-9e8e-645fdb31ea88","Type":"ContainerStarted","Data":"ada54c8a99d59f4978026df480de5fa1fc20b957d8273eef164cca6ef8a79cc9"} Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.765410 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.776899 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.776943 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.776957 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.776975 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.776991 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:09Z","lastTransitionTime":"2025-11-25T14:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.795871 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa1ac75345aee01481d2e32c30cc7b0484b9e7b4901487b4c12d6dccf6c689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b044671af0b40722e3c8b08674597b4f20edc3b99e39013ec71656b16f222bc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:06Z\\\",\\\"message\\\":\\\"4:25:06.026498 6055 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:25:06.026511 6055 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 14:25:06.026521 6055 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:25:06.026564 6055 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 14:25:06.026660 6055 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 14:25:06.026726 6055 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 14:25:06.026739 6055 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1125 14:25:06.026760 6055 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:25:06.026768 6055 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 14:25:06.026791 6055 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:25:06.026816 6055 factory.go:656] Stopping watch factory\\\\nI1125 14:25:06.026820 6055 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:25:06.026832 6055 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:25:06.026834 6055 handler.go:208] Removed 
*v1.Namespace event handler 1\\\\nI1125 14:25:06.026878 6055 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 14:25:06.026874 6055 handler.go:208] Removed *v1.Node even\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa1ac75345aee01481d2e32c30cc7b0484b9e7b4901487b4c12d6dccf6c689\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:08Z\\\",\\\"message\\\":\\\"twork controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z]\\\\nI1125 14:25:08.157195 6254 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-h6xfl after 0 failed attempt(s)\\\\nI1125 14:25:08.157204 6254 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-h6xfl\\\\nI1125 14:25:08.156865 6254 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-nz5r2\\\\nI1125 14:25:08.156947 6254 services_controller.go:443] Built service openshift-kube-controller-manager/kube-controller-manager LB cluster-wide configs for network=default: 
[]services.lbConfig{\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69
f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.808882 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603b
e378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.824338 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.824403 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.824420 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:09 crc 
kubenswrapper[4796]: I1125 14:25:09.824442 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.824458 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:09Z","lastTransitionTime":"2025-11-25T14:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.825159 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.836911 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs\") pod \"network-metrics-daemon-n4f9r\" (UID: \"a07d588f-1940-4a4b-a4a9-94451e43ec8d\") " pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:09 crc kubenswrapper[4796]: E1125 14:25:09.837686 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:25:09 crc kubenswrapper[4796]: E1125 14:25:09.837816 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs podName:a07d588f-1940-4a4b-a4a9-94451e43ec8d nodeName:}" failed. No retries permitted until 2025-11-25 14:25:10.83778578 +0000 UTC m=+39.180895304 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs") pod "network-metrics-daemon-n4f9r" (UID: "a07d588f-1940-4a4b-a4a9-94451e43ec8d") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:25:09 crc kubenswrapper[4796]: E1125 14:25:09.844843 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb4
9c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\"
:[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d4
6c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\
\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b
2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.845393 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\
\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.849555 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.849601 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.849611 4796 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.849625 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.849636 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:09Z","lastTransitionTime":"2025-11-25T14:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:09 crc kubenswrapper[4796]: E1125 14:25:09.864870 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.865033 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.870351 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.870403 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.870422 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.870448 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.870469 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:09Z","lastTransitionTime":"2025-11-25T14:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.881322 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: E1125 14:25:09.890363 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.897032 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.897091 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.897103 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.897122 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.897133 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:09Z","lastTransitionTime":"2025-11-25T14:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.897567 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: E1125 14:25:09.913514 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.919253 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.919660 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.919718 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.919736 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.919762 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.919780 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:09Z","lastTransitionTime":"2025-11-25T14:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.933035 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n4f9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07d588f-1940-4a4b-a4a9-94451e43ec8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n4f9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc 
kubenswrapper[4796]: E1125 14:25:09.937195 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: E1125 14:25:09.937415 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.939228 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.939279 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.939297 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.939317 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.939333 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:09Z","lastTransitionTime":"2025-11-25T14:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.949543 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.964296 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.980953 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:09 crc kubenswrapper[4796]: I1125 14:25:09.994743 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.013120 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d223a119-11a5-4802-9e8e-645fdb31ea88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://021d2f3be55ec89048bafcd7e73eccf6bd883ad03e92be98479670ef2abde11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada54c8a99d59f4978026df480de5fa1fc20b
957d8273eef164cca6ef8a79cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsz8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:10Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.028626 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:10Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.041980 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.042031 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.042047 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.042074 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.042091 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:10Z","lastTransitionTime":"2025-11-25T14:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.144228 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.144299 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.144323 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.144354 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.144374 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:10Z","lastTransitionTime":"2025-11-25T14:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.247294 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.247360 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.247383 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.247412 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.247433 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:10Z","lastTransitionTime":"2025-11-25T14:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.350213 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.350274 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.350297 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.350328 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.350350 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:10Z","lastTransitionTime":"2025-11-25T14:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.452925 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.452987 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.453006 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.453030 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.453047 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:10Z","lastTransitionTime":"2025-11-25T14:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.555230 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.555298 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.555319 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.555344 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.555363 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:10Z","lastTransitionTime":"2025-11-25T14:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.657789 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.657853 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.657871 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.657895 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.657952 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:10Z","lastTransitionTime":"2025-11-25T14:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.760279 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.760339 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.760357 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.760380 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.760397 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:10Z","lastTransitionTime":"2025-11-25T14:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.853472 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs\") pod \"network-metrics-daemon-n4f9r\" (UID: \"a07d588f-1940-4a4b-a4a9-94451e43ec8d\") " pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:10 crc kubenswrapper[4796]: E1125 14:25:10.853627 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:25:10 crc kubenswrapper[4796]: E1125 14:25:10.853672 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs podName:a07d588f-1940-4a4b-a4a9-94451e43ec8d nodeName:}" failed. No retries permitted until 2025-11-25 14:25:12.853659662 +0000 UTC m=+41.196769086 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs") pod "network-metrics-daemon-n4f9r" (UID: "a07d588f-1940-4a4b-a4a9-94451e43ec8d") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.862907 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.862935 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.862945 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.862961 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.862986 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:10Z","lastTransitionTime":"2025-11-25T14:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.966031 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.966124 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.966146 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.966170 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:10 crc kubenswrapper[4796]: I1125 14:25:10.966188 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:10Z","lastTransitionTime":"2025-11-25T14:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.069759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.069823 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.069880 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.069913 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.069935 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:11Z","lastTransitionTime":"2025-11-25T14:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.173519 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.173649 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.173669 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.173699 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.173717 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:11Z","lastTransitionTime":"2025-11-25T14:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.277654 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.277734 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.277762 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.277795 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.277820 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:11Z","lastTransitionTime":"2025-11-25T14:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.383529 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.384234 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.384440 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.384841 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.385060 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:11Z","lastTransitionTime":"2025-11-25T14:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.408346 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.408385 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.408423 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.408464 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:11 crc kubenswrapper[4796]: E1125 14:25:11.408627 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:11 crc kubenswrapper[4796]: E1125 14:25:11.408712 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:25:11 crc kubenswrapper[4796]: E1125 14:25:11.408865 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:11 crc kubenswrapper[4796]: E1125 14:25:11.409047 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.414424 4796 scope.go:117] "RemoveContainer" containerID="f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.488142 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.488750 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.488770 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.488796 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.488812 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:11Z","lastTransitionTime":"2025-11-25T14:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.592146 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.592210 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.592229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.592257 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.592275 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:11Z","lastTransitionTime":"2025-11-25T14:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.696911 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.696968 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.696987 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.697011 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.697028 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:11Z","lastTransitionTime":"2025-11-25T14:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.762873 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.765843 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50"} Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.766709 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.792407 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.799084 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.799225 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.799326 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.799446 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.799548 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:11Z","lastTransitionTime":"2025-11-25T14:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.815453 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec
08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.842028 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.860220 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.874347 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.892308 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.901920 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:11 crc 
kubenswrapper[4796]: I1125 14:25:11.902044 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.902113 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.902178 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.902243 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:11Z","lastTransitionTime":"2025-11-25T14:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.908484 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603b
e378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.923810 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n4f9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07d588f-1940-4a4b-a4a9-94451e43ec8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n4f9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:11 crc 
kubenswrapper[4796]: I1125 14:25:11.941041 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.972071 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypo
int\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\"
:\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:11 crc kubenswrapper[4796]: I1125 14:25:11.986475 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db4
46b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.003799 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d223a119-11a5-4802-9e8e-645fdb31ea88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://021d2f3be55ec89048bafcd7e73eccf6bd883ad03e92be98479670ef2abde11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada54c8a99d59f4978026df480de5fa1fc20b
957d8273eef164cca6ef8a79cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsz8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.005957 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.006034 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.006061 4796 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.006096 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.006120 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:12Z","lastTransitionTime":"2025-11-25T14:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.025364 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.048323 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.066471 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.096754 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa1ac75345aee01481d2e32c30cc7b0484b9e7b4901487b4c12d6dccf6c689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b044671af0b40722e3c8b08674597b4f20edc3b99e39013ec71656b16f222bc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:06Z\\\",\\\"message\\\":\\\"4:25:06.026498 6055 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:25:06.026511 6055 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 14:25:06.026521 6055 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:25:06.026564 6055 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 14:25:06.026660 6055 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 14:25:06.026726 6055 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 14:25:06.026739 6055 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1125 14:25:06.026760 6055 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:25:06.026768 6055 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 14:25:06.026791 6055 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:25:06.026816 6055 factory.go:656] Stopping watch factory\\\\nI1125 14:25:06.026820 6055 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:25:06.026832 6055 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:25:06.026834 6055 handler.go:208] Removed 
*v1.Namespace event handler 1\\\\nI1125 14:25:06.026878 6055 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 14:25:06.026874 6055 handler.go:208] Removed *v1.Node even\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa1ac75345aee01481d2e32c30cc7b0484b9e7b4901487b4c12d6dccf6c689\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:08Z\\\",\\\"message\\\":\\\"twork controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z]\\\\nI1125 14:25:08.157195 6254 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-h6xfl after 0 failed attempt(s)\\\\nI1125 14:25:08.157204 6254 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-h6xfl\\\\nI1125 14:25:08.156865 6254 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-nz5r2\\\\nI1125 14:25:08.156947 6254 services_controller.go:443] Built service openshift-kube-controller-manager/kube-controller-manager LB cluster-wide configs for network=default: 
[]services.lbConfig{\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69
f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.108833 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.109002 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.109126 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.109277 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.109398 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:12Z","lastTransitionTime":"2025-11-25T14:25:12Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.213005 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.213387 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.213672 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.213858 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.214065 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:12Z","lastTransitionTime":"2025-11-25T14:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.317493 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.317558 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.317616 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.317646 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.317663 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:12Z","lastTransitionTime":"2025-11-25T14:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.420125 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.420168 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.420179 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.420194 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.420205 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:12Z","lastTransitionTime":"2025-11-25T14:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.437396 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.456500 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.477105 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.497905 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.518721 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.521635 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.521676 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.521688 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.521707 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.521724 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:12Z","lastTransitionTime":"2025-11-25T14:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.535635 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:
24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.546498 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603be378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T14:25:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.556740 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n4f9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07d588f-1940-4a4b-a4a9-94451e43ec8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n4f9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:12 crc 
kubenswrapper[4796]: I1125 14:25:12.568864 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.578875 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:25:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.598172 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.606814 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.618198 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d223a119-11a5-4802-9e8e-645fdb31ea88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://021d2f3be55ec89048bafcd7e73eccf6bd883ad03e92be98479670ef2abde11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada54c8a99d59f4978026df480de5fa1fc20b
957d8273eef164cca6ef8a79cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsz8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.626031 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.626069 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.626083 4796 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.626106 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.626120 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:12Z","lastTransitionTime":"2025-11-25T14:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.629939 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.644063 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.666998 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa1ac75345aee01481d2e32c30cc7b0484b9e7b4901487b4c12d6dccf6c689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b044671af0b40722e3c8b08674597b4f20edc3b99e39013ec71656b16f222bc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:06Z\\\",\\\"message\\\":\\\"4:25:06.026498 6055 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:25:06.026511 6055 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 14:25:06.026521 6055 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:25:06.026564 6055 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 14:25:06.026660 6055 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 14:25:06.026726 6055 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 14:25:06.026739 6055 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1125 14:25:06.026760 6055 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:25:06.026768 6055 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 14:25:06.026791 6055 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:25:06.026816 6055 factory.go:656] Stopping watch factory\\\\nI1125 14:25:06.026820 6055 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:25:06.026832 6055 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:25:06.026834 6055 handler.go:208] Removed 
*v1.Namespace event handler 1\\\\nI1125 14:25:06.026878 6055 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 14:25:06.026874 6055 handler.go:208] Removed *v1.Node even\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa1ac75345aee01481d2e32c30cc7b0484b9e7b4901487b4c12d6dccf6c689\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:08Z\\\",\\\"message\\\":\\\"twork controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z]\\\\nI1125 14:25:08.157195 6254 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-h6xfl after 0 failed attempt(s)\\\\nI1125 14:25:08.157204 6254 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-h6xfl\\\\nI1125 14:25:08.156865 6254 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-nz5r2\\\\nI1125 14:25:08.156947 6254 services_controller.go:443] Built service openshift-kube-controller-manager/kube-controller-manager LB cluster-wide configs for network=default: 
[]services.lbConfig{\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69
f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.728664 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.728712 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.728723 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.728741 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.728751 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:12Z","lastTransitionTime":"2025-11-25T14:25:12Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.830682 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.830762 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.830785 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.830806 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.830821 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:12Z","lastTransitionTime":"2025-11-25T14:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.876202 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs\") pod \"network-metrics-daemon-n4f9r\" (UID: \"a07d588f-1940-4a4b-a4a9-94451e43ec8d\") " pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:12 crc kubenswrapper[4796]: E1125 14:25:12.876360 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:25:12 crc kubenswrapper[4796]: E1125 14:25:12.876409 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs podName:a07d588f-1940-4a4b-a4a9-94451e43ec8d nodeName:}" failed. No retries permitted until 2025-11-25 14:25:16.876394957 +0000 UTC m=+45.219504381 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs") pod "network-metrics-daemon-n4f9r" (UID: "a07d588f-1940-4a4b-a4a9-94451e43ec8d") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.933317 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.933365 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.933383 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.933406 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:12 crc kubenswrapper[4796]: I1125 14:25:12.933424 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:12Z","lastTransitionTime":"2025-11-25T14:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.035911 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.035948 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.035959 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.035976 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.035987 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:13Z","lastTransitionTime":"2025-11-25T14:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.138377 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.138427 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.138440 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.138457 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.138470 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:13Z","lastTransitionTime":"2025-11-25T14:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.242105 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.242174 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.242196 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.242229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.242255 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:13Z","lastTransitionTime":"2025-11-25T14:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.344877 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.344911 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.344936 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.344953 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.344962 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:13Z","lastTransitionTime":"2025-11-25T14:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.408404 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.408519 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.408416 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:13 crc kubenswrapper[4796]: E1125 14:25:13.408644 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.408418 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:13 crc kubenswrapper[4796]: E1125 14:25:13.408749 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:25:13 crc kubenswrapper[4796]: E1125 14:25:13.408828 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:13 crc kubenswrapper[4796]: E1125 14:25:13.408899 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.448254 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.448301 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.448316 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.448360 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.448374 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:13Z","lastTransitionTime":"2025-11-25T14:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.551668 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.552033 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.552175 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.552311 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.552428 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:13Z","lastTransitionTime":"2025-11-25T14:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.656546 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.656992 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.657153 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.657293 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.657424 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:13Z","lastTransitionTime":"2025-11-25T14:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.760846 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.760881 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.760889 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.760901 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.760910 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:13Z","lastTransitionTime":"2025-11-25T14:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.864098 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.864141 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.864152 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.864171 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.864185 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:13Z","lastTransitionTime":"2025-11-25T14:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.967370 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.967433 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.967451 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.967474 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:13 crc kubenswrapper[4796]: I1125 14:25:13.967496 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:13Z","lastTransitionTime":"2025-11-25T14:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.070419 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.070485 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.070505 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.070533 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.070551 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:14Z","lastTransitionTime":"2025-11-25T14:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.174013 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.174070 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.174089 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.174112 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.174130 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:14Z","lastTransitionTime":"2025-11-25T14:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.276741 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.276802 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.276814 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.276831 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.276842 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:14Z","lastTransitionTime":"2025-11-25T14:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.379849 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.379923 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.379958 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.379994 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.380016 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:14Z","lastTransitionTime":"2025-11-25T14:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.484378 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.484429 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.484442 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.484460 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.484473 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:14Z","lastTransitionTime":"2025-11-25T14:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.587772 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.587840 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.587857 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.587881 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.587899 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:14Z","lastTransitionTime":"2025-11-25T14:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.691365 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.691410 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.691423 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.691440 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.691449 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:14Z","lastTransitionTime":"2025-11-25T14:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.794645 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.794762 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.794823 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.794853 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.794873 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:14Z","lastTransitionTime":"2025-11-25T14:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.897508 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.897593 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.897611 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.897661 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:14 crc kubenswrapper[4796]: I1125 14:25:14.897677 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:14Z","lastTransitionTime":"2025-11-25T14:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.000818 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.000882 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.000904 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.000933 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.000955 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:15Z","lastTransitionTime":"2025-11-25T14:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.103905 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.103959 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.103977 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.104002 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.104022 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:15Z","lastTransitionTime":"2025-11-25T14:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.206890 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.206951 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.206969 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.206992 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.207010 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:15Z","lastTransitionTime":"2025-11-25T14:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.309609 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.309651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.309661 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.309676 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.309686 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:15Z","lastTransitionTime":"2025-11-25T14:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.409051 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.409130 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.409165 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.409181 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:15 crc kubenswrapper[4796]: E1125 14:25:15.409297 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:15 crc kubenswrapper[4796]: E1125 14:25:15.409436 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:15 crc kubenswrapper[4796]: E1125 14:25:15.409526 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:25:15 crc kubenswrapper[4796]: E1125 14:25:15.411219 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.412282 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.412325 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.412339 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.412360 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.412372 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:15Z","lastTransitionTime":"2025-11-25T14:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.515440 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.515537 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.515562 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.515622 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.515648 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:15Z","lastTransitionTime":"2025-11-25T14:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.618144 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.618204 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.618221 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.618244 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.618263 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:15Z","lastTransitionTime":"2025-11-25T14:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.721218 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.721288 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.721313 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.721343 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.721363 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:15Z","lastTransitionTime":"2025-11-25T14:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.824451 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.824498 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.824515 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.824537 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.824555 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:15Z","lastTransitionTime":"2025-11-25T14:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.927418 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.927474 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.927495 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.927523 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:15 crc kubenswrapper[4796]: I1125 14:25:15.927540 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:15Z","lastTransitionTime":"2025-11-25T14:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.030764 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.030824 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.030841 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.030867 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.030884 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:16Z","lastTransitionTime":"2025-11-25T14:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.133681 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.133743 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.133761 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.133784 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.133800 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:16Z","lastTransitionTime":"2025-11-25T14:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.237337 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.237400 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.237417 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.237442 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.237461 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:16Z","lastTransitionTime":"2025-11-25T14:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.340686 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.340730 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.340747 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.340769 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.340787 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:16Z","lastTransitionTime":"2025-11-25T14:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.443677 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.443765 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.443789 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.443815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.443833 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:16Z","lastTransitionTime":"2025-11-25T14:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.547097 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.547174 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.547199 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.547235 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.547257 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:16Z","lastTransitionTime":"2025-11-25T14:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.649991 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.650037 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.650053 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.650078 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.650094 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:16Z","lastTransitionTime":"2025-11-25T14:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.753080 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.753166 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.753185 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.753209 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.753226 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:16Z","lastTransitionTime":"2025-11-25T14:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.856137 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.856211 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.856230 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.856255 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.856273 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:16Z","lastTransitionTime":"2025-11-25T14:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.919046 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs\") pod \"network-metrics-daemon-n4f9r\" (UID: \"a07d588f-1940-4a4b-a4a9-94451e43ec8d\") " pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:16 crc kubenswrapper[4796]: E1125 14:25:16.919254 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:25:16 crc kubenswrapper[4796]: E1125 14:25:16.919375 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs podName:a07d588f-1940-4a4b-a4a9-94451e43ec8d nodeName:}" failed. No retries permitted until 2025-11-25 14:25:24.919345347 +0000 UTC m=+53.262454811 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs") pod "network-metrics-daemon-n4f9r" (UID: "a07d588f-1940-4a4b-a4a9-94451e43ec8d") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.959271 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.959325 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.959341 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.959364 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:16 crc kubenswrapper[4796]: I1125 14:25:16.959385 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:16Z","lastTransitionTime":"2025-11-25T14:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.063000 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.063116 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.063168 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.063196 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.063217 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:17Z","lastTransitionTime":"2025-11-25T14:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.166074 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.166143 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.166169 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.166195 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.166213 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:17Z","lastTransitionTime":"2025-11-25T14:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.268934 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.269000 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.269010 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.269026 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.269035 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:17Z","lastTransitionTime":"2025-11-25T14:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.372561 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.372620 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.372632 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.372648 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.372659 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:17Z","lastTransitionTime":"2025-11-25T14:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.408759 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.408840 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:17 crc kubenswrapper[4796]: E1125 14:25:17.408912 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:17 crc kubenswrapper[4796]: E1125 14:25:17.409012 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.408773 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:17 crc kubenswrapper[4796]: E1125 14:25:17.409133 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.408773 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:17 crc kubenswrapper[4796]: E1125 14:25:17.409239 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.474795 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.474868 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.474881 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.474900 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.474912 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:17Z","lastTransitionTime":"2025-11-25T14:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.579629 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.579711 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.579735 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.579765 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.579798 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:17Z","lastTransitionTime":"2025-11-25T14:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.683695 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.683818 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.683855 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.683892 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.683918 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:17Z","lastTransitionTime":"2025-11-25T14:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.787533 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.787606 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.787621 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.787640 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.787653 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:17Z","lastTransitionTime":"2025-11-25T14:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.890562 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.890662 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.890682 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.890710 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.890728 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:17Z","lastTransitionTime":"2025-11-25T14:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.994436 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.994506 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.994523 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.994547 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:17 crc kubenswrapper[4796]: I1125 14:25:17.994564 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:17Z","lastTransitionTime":"2025-11-25T14:25:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.097629 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.097696 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.097721 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.097749 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.097771 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:18Z","lastTransitionTime":"2025-11-25T14:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.200160 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.200248 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.200272 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.200296 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.200313 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:18Z","lastTransitionTime":"2025-11-25T14:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.304091 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.304169 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.304188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.304216 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.304233 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:18Z","lastTransitionTime":"2025-11-25T14:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.406779 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.406859 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.406930 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.406963 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.406984 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:18Z","lastTransitionTime":"2025-11-25T14:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.511243 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.511322 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.511341 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.511371 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.511394 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:18Z","lastTransitionTime":"2025-11-25T14:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.614360 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.614463 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.614512 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.614542 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.614561 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:18Z","lastTransitionTime":"2025-11-25T14:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.717689 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.717766 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.717780 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.717800 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.717811 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:18Z","lastTransitionTime":"2025-11-25T14:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.820127 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.820182 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.820200 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.820226 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.820304 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:18Z","lastTransitionTime":"2025-11-25T14:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.923696 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.923761 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.923783 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.923813 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:18 crc kubenswrapper[4796]: I1125 14:25:18.923833 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:18Z","lastTransitionTime":"2025-11-25T14:25:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.027690 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.027757 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.027775 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.027800 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.027817 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:19Z","lastTransitionTime":"2025-11-25T14:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.131052 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.131116 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.131314 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.131345 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.131371 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:19Z","lastTransitionTime":"2025-11-25T14:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.234704 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.234779 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.234801 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.234830 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.234851 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:19Z","lastTransitionTime":"2025-11-25T14:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.338193 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.338253 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.338277 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.338302 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.338359 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:19Z","lastTransitionTime":"2025-11-25T14:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.408602 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.408610 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:19 crc kubenswrapper[4796]: E1125 14:25:19.408744 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.408798 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.408855 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:19 crc kubenswrapper[4796]: E1125 14:25:19.409039 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:19 crc kubenswrapper[4796]: E1125 14:25:19.409169 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:19 crc kubenswrapper[4796]: E1125 14:25:19.409313 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.441143 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.441193 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.441205 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.441222 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.441234 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:19Z","lastTransitionTime":"2025-11-25T14:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.543750 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.543805 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.543816 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.543833 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.543845 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:19Z","lastTransitionTime":"2025-11-25T14:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.647315 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.647374 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.647391 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.647415 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.647430 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:19Z","lastTransitionTime":"2025-11-25T14:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.750493 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.750535 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.750564 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.750611 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.750620 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:19Z","lastTransitionTime":"2025-11-25T14:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.854637 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.854732 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.854763 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.854799 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.854817 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:19Z","lastTransitionTime":"2025-11-25T14:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.958031 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.961945 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.961973 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.962034 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:19 crc kubenswrapper[4796]: I1125 14:25:19.962057 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:19Z","lastTransitionTime":"2025-11-25T14:25:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.065254 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.065300 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.065318 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.065340 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.065356 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:20Z","lastTransitionTime":"2025-11-25T14:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.133421 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.133471 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.133490 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.133513 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.133530 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:20Z","lastTransitionTime":"2025-11-25T14:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:20 crc kubenswrapper[4796]: E1125 14:25:20.149430 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.153563 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.153792 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.153918 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.154053 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.154168 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:20Z","lastTransitionTime":"2025-11-25T14:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:20 crc kubenswrapper[4796]: E1125 14:25:20.167602 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.172833 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.173042 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.173328 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.173652 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.173773 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:20Z","lastTransitionTime":"2025-11-25T14:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:20 crc kubenswrapper[4796]: E1125 14:25:20.191346 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.196349 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.196416 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.196433 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.196457 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.196475 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:20Z","lastTransitionTime":"2025-11-25T14:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:20 crc kubenswrapper[4796]: E1125 14:25:20.216635 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.222933 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.223228 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.223372 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.224014 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.224379 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:20Z","lastTransitionTime":"2025-11-25T14:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:20 crc kubenswrapper[4796]: E1125 14:25:20.249396 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: E1125 14:25:20.249692 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.252298 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.252542 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.252735 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.252908 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.253141 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:20Z","lastTransitionTime":"2025-11-25T14:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.356364 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.356650 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.356746 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.356877 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.357017 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:20Z","lastTransitionTime":"2025-11-25T14:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.409940 4796 scope.go:117] "RemoveContainer" containerID="1baa1ac75345aee01481d2e32c30cc7b0484b9e7b4901487b4c12d6dccf6c689" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.434192 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.448610 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.459674 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.459743 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.459762 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.459789 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.459809 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:20Z","lastTransitionTime":"2025-11-25T14:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.479185 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1baa1ac75345aee01481d2e32c30cc7b0484b9e7b4901487b4c12d6dccf6c689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa1ac75345aee01481d2e32c30cc7b0484b9e7b4901487b4c12d6dccf6c689\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:08Z\\\",\\\"message\\\":\\\"twork controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z]\\\\nI1125 14:25:08.157195 6254 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-h6xfl after 0 failed attempt(s)\\\\nI1125 14:25:08.157204 6254 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-h6xfl\\\\nI1125 14:25:08.156865 6254 default_network_controller.go:776] Recording success event on pod 
openshift-dns/node-resolver-nz5r2\\\\nI1125 14:25:08.156947 6254 services_controller.go:443] Built service openshift-kube-controller-manager/kube-controller-manager LB cluster-wide configs for network=default: []services.lbConfig{\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-22sz8_openshift-ovn-kubernetes(6eddc136-852e-4cf9-9f8a-e9ec94fc14d3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"et
c-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df
0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.503452 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.525920 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.549220 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.562770 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:20 crc 
kubenswrapper[4796]: I1125 14:25:20.562829 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.562847 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.562873 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.562891 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:20Z","lastTransitionTime":"2025-11-25T14:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.571767 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603b
e378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.593123 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.615295 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.635769 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.656927 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n4f9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07d588f-1940-4a4b-a4a9-94451e43ec8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n4f9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc 
kubenswrapper[4796]: I1125 14:25:20.669424 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.669466 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.669477 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.669537 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.669550 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:20Z","lastTransitionTime":"2025-11-25T14:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.679091 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d223a119-11a5-4802-9e8e-645fdb31ea88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://021d2f3be55ec89048bafcd7e73eccf6bd883ad03e92be98479670ef2abde11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada54c8a99d59f4978026df480de5fa1fc20b957d8273eef164cca6ef8a79cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsz8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.697528 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.712865 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.734497 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.747784 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.773567 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.773649 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.773668 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.773694 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.773743 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:20Z","lastTransitionTime":"2025-11-25T14:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.810241 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovnkube-controller/1.log" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.814403 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovn-acl-logging/0.log" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.815475 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerStarted","Data":"bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1"} Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.815672 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.834133 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.845354 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.874314 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa1ac75345aee01481d2e32c30cc7b0484b9e7b4901487b4c12d6dccf6c689\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:08Z\\\",\\\"message\\\":\\\"twork controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z]\\\\nI1125 14:25:08.157195 6254 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-h6xfl after 0 failed attempt(s)\\\\nI1125 14:25:08.157204 6254 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-h6xfl\\\\nI1125 14:25:08.156865 6254 default_network_controller.go:776] Recording success event on pod 
openshift-dns/node-resolver-nz5r2\\\\nI1125 14:25:08.156947 6254 services_controller.go:443] Built service openshift-kube-controller-manager/kube-controller-manager LB cluster-wide configs for network=default: []services.lbConfig{\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\
\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b15
4edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.875762 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.875804 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.875817 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.875838 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.875854 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:20Z","lastTransitionTime":"2025-11-25T14:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.899715 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.918930 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea
83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603be378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.941661 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.955094 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.968547 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.978247 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.978300 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.978318 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.978342 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.978358 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:20Z","lastTransitionTime":"2025-11-25T14:25:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.981709 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:20 crc kubenswrapper[4796]: I1125 14:25:20.995096 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.008421 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n4f9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07d588f-1940-4a4b-a4a9-94451e43ec8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n4f9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:21 crc 
kubenswrapper[4796]: I1125 14:25:21.023041 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.036221 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:25:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.053669 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.065216 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.080952 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.081001 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.081018 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.081042 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.081060 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:21Z","lastTransitionTime":"2025-11-25T14:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.084343 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d223a119-11a5-4802-9e8e-645fdb31ea88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://021d2f3be55ec89048bafcd7e73eccf6bd883ad03e92be98479670ef2abde11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada54c8a99d59f4978026df480de5fa1fc20b957d8273eef164cca6ef8a79cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsz8g\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.183959 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.184249 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.184259 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.184273 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.184282 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:21Z","lastTransitionTime":"2025-11-25T14:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.286791 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.286850 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.286868 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.286891 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.286909 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:21Z","lastTransitionTime":"2025-11-25T14:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.389527 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.389630 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.389657 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.389691 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.389712 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:21Z","lastTransitionTime":"2025-11-25T14:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.408823 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.408863 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.408891 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.408839 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:21 crc kubenswrapper[4796]: E1125 14:25:21.408999 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:21 crc kubenswrapper[4796]: E1125 14:25:21.409091 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:25:21 crc kubenswrapper[4796]: E1125 14:25:21.409265 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:21 crc kubenswrapper[4796]: E1125 14:25:21.409338 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.492450 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.492512 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.492533 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.492557 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.492602 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:21Z","lastTransitionTime":"2025-11-25T14:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.596041 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.596104 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.596121 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.596144 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.596161 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:21Z","lastTransitionTime":"2025-11-25T14:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.699091 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.699158 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.699174 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.699199 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.699215 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:21Z","lastTransitionTime":"2025-11-25T14:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.802658 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.802747 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.802768 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.802792 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.802811 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:21Z","lastTransitionTime":"2025-11-25T14:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.821711 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovnkube-controller/2.log" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.822656 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovnkube-controller/1.log" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.825939 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovn-acl-logging/0.log" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.826961 4796 generic.go:334] "Generic (PLEG): container finished" podID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerID="bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1" exitCode=1 Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.827021 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerDied","Data":"bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1"} Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.827084 4796 scope.go:117] "RemoveContainer" containerID="1baa1ac75345aee01481d2e32c30cc7b0484b9e7b4901487b4c12d6dccf6c689" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.827865 4796 scope.go:117] "RemoveContainer" containerID="bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1" Nov 25 14:25:21 crc kubenswrapper[4796]: E1125 14:25:21.828035 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-22sz8_openshift-ovn-kubernetes(6eddc136-852e-4cf9-9f8a-e9ec94fc14d3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.851824 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.868750 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:25:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.892194 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.905866 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.905929 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.905947 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.905971 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.905988 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:21Z","lastTransitionTime":"2025-11-25T14:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.908842 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.927353 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d223a119-11a5-4802-9e8e-645fdb31ea88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://021d2f3be
55ec89048bafcd7e73eccf6bd883ad03e92be98479670ef2abde11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada54c8a99d59f4978026df480de5fa1fc20b957d8273eef164cca6ef8a79cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsz8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.947272 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:21 crc kubenswrapper[4796]: I1125 14:25:21.964039 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.002074 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa1ac75345aee01481d2e32c30cc7b0484b9e7b4901487b4c12d6dccf6c689\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:08Z\\\",\\\"message\\\":\\\"twork controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z]\\\\nI1125 14:25:08.157195 6254 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-h6xfl after 0 failed attempt(s)\\\\nI1125 14:25:08.157204 6254 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-h6xfl\\\\nI1125 14:25:08.156865 6254 default_network_controller.go:776] Recording success event on pod 
openshift-dns/node-resolver-nz5r2\\\\nI1125 14:25:08.156947 6254 services_controller.go:443] Built service openshift-kube-controller-manager/kube-controller-manager LB cluster-wide configs for network=default: []services.lbConfig{\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:21Z\\\",\\\"message\\\":\\\"479 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1125 14:25:21.321197 6479 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 14:25:21.321230 6479 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:25:21.321244 6479 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 14:25:21.321289 6479 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:25:21.321302 6479 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:25:21.322883 6479 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1125 14:25:21.322905 6479 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 14:25:21.322934 6479 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:25:21.322947 6479 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 14:25:21.322999 6479 factory.go:656] Stopping watch factory\\\\nI1125 14:25:21.323026 6479 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:25:21.323059 6479 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:25:21.323078 6479 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 14:25:21.323091 6479 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 14:25:21.323107 6479 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 14:25:21.323204 6479 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"c
ri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.008535 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.008608 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.008648 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.008674 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.008690 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:22Z","lastTransitionTime":"2025-11-25T14:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.023219 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:
24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.042677 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603be378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.065120 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.086089 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.107099 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.112474 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.112651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.112676 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.112764 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.112855 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:22Z","lastTransitionTime":"2025-11-25T14:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.131541 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.151325 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.166362 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n4f9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07d588f-1940-4a4b-a4a9-94451e43ec8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n4f9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc 
kubenswrapper[4796]: I1125 14:25:22.216748 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.216819 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.216842 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.216868 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.216886 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:22Z","lastTransitionTime":"2025-11-25T14:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.320182 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.320256 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.320275 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.320298 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.320315 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:22Z","lastTransitionTime":"2025-11-25T14:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.423290 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.423361 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.423386 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.423413 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.423430 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:22Z","lastTransitionTime":"2025-11-25T14:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.436831 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z 
is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.455036 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603be378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.477343 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.483215 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.497060 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.518417 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.524652 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.524695 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.524708 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.524723 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.524732 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:22Z","lastTransitionTime":"2025-11-25T14:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.541090 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.557410 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.568503 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n4f9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07d588f-1940-4a4b-a4a9-94451e43ec8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n4f9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc 
kubenswrapper[4796]: I1125 14:25:22.582792 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.595146 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.618275 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.627384 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.627450 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.627466 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.627489 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.627509 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:22Z","lastTransitionTime":"2025-11-25T14:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.636977 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.660787 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d223a119-11a5-4802-9e8e-645fdb31ea88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://021d2f3be
55ec89048bafcd7e73eccf6bd883ad03e92be98479670ef2abde11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada54c8a99d59f4978026df480de5fa1fc20b957d8273eef164cca6ef8a79cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsz8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.680030 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.698712 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.721803 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1baa1ac75345aee01481d2e32c30cc7b0484b9e7b4901487b4c12d6dccf6c689\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:08Z\\\",\\\"message\\\":\\\"twork controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:08Z is after 2025-08-24T17:21:41Z]\\\\nI1125 14:25:08.157195 6254 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-h6xfl after 0 failed attempt(s)\\\\nI1125 14:25:08.157204 6254 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-h6xfl\\\\nI1125 14:25:08.156865 6254 default_network_controller.go:776] Recording success event on pod 
openshift-dns/node-resolver-nz5r2\\\\nI1125 14:25:08.156947 6254 services_controller.go:443] Built service openshift-kube-controller-manager/kube-controller-manager LB cluster-wide configs for network=default: []services.lbConfig{\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:21Z\\\",\\\"message\\\":\\\"479 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1125 14:25:21.321197 6479 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 14:25:21.321230 6479 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:25:21.321244 6479 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 14:25:21.321289 6479 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:25:21.321302 6479 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:25:21.322883 6479 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1125 14:25:21.322905 6479 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 14:25:21.322934 6479 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:25:21.322947 6479 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 14:25:21.322999 6479 factory.go:656] Stopping watch factory\\\\nI1125 14:25:21.323026 6479 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:25:21.323059 6479 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:25:21.323078 6479 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 14:25:21.323091 6479 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 14:25:21.323107 6479 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 14:25:21.323204 6479 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"c
ri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.730565 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.730743 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.730776 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.730807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.730832 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:22Z","lastTransitionTime":"2025-11-25T14:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.833363 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovnkube-controller/2.log" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.834107 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.834175 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.834200 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.834234 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.834260 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:22Z","lastTransitionTime":"2025-11-25T14:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.837095 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovn-acl-logging/0.log" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.839197 4796 scope.go:117] "RemoveContainer" containerID="bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1" Nov 25 14:25:22 crc kubenswrapper[4796]: E1125 14:25:22.839394 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-22sz8_openshift-ovn-kubernetes(6eddc136-852e-4cf9-9f8a-e9ec94fc14d3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.857161 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.872455 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.895279 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:21Z\\\",\\\"message\\\":\\\"479 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1125 14:25:21.321197 6479 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 14:25:21.321230 6479 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:25:21.321244 6479 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 14:25:21.321289 6479 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:25:21.321302 6479 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:25:21.322883 6479 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1125 14:25:21.322905 6479 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 14:25:21.322934 6479 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:25:21.322947 6479 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 14:25:21.322999 6479 factory.go:656] Stopping watch factory\\\\nI1125 14:25:21.323026 6479 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:25:21.323059 6479 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:25:21.323078 6479 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 
14:25:21.323091 6479 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 14:25:21.323107 6479 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 14:25:21.323204 6479 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-22sz8_openshift-ovn-kubernetes(6eddc136-852e-4cf9-9f8a-e9ec94fc14d3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60
d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.911236 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.929471 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.936458 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:22 crc 
kubenswrapper[4796]: I1125 14:25:22.936501 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.936519 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.936545 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.936562 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:22Z","lastTransitionTime":"2025-11-25T14:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.940940 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603b
e378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.953664 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.967819 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.984393 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:22 crc kubenswrapper[4796]: I1125 14:25:22.999733 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.012943 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n4f9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07d588f-1940-4a4b-a4a9-94451e43ec8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n4f9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:23 crc 
kubenswrapper[4796]: I1125 14:25:23.023957 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.035978 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:25:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.039008 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.039120 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.039260 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.039329 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.039403 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:23Z","lastTransitionTime":"2025-11-25T14:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.054183 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.066240 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.080255 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d223a119-11a5-4802-9e8e-645fdb31ea88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://021d2f3be55ec89048bafcd7e73eccf6bd883ad03e92be98479670ef2abde11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada54c8a99d59f4978026df480de5fa1fc20b
957d8273eef164cca6ef8a79cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsz8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.142371 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.142424 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.142442 4796 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.142464 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.142481 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:23Z","lastTransitionTime":"2025-11-25T14:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.245523 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.245613 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.245631 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.245651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.245669 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:23Z","lastTransitionTime":"2025-11-25T14:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.347827 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.347893 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.347917 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.347947 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.347969 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:23Z","lastTransitionTime":"2025-11-25T14:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.409064 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.409090 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.409059 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.409180 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:23 crc kubenswrapper[4796]: E1125 14:25:23.409379 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:23 crc kubenswrapper[4796]: E1125 14:25:23.409488 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:23 crc kubenswrapper[4796]: E1125 14:25:23.409637 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:25:23 crc kubenswrapper[4796]: E1125 14:25:23.409724 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.450826 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.450889 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.450911 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.450943 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.450967 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:23Z","lastTransitionTime":"2025-11-25T14:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.554363 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.554674 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.554853 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.554984 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.555096 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:23Z","lastTransitionTime":"2025-11-25T14:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.658467 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.658819 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.658945 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.659063 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.659215 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:23Z","lastTransitionTime":"2025-11-25T14:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.762130 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.762164 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.762176 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.762191 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.762202 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:23Z","lastTransitionTime":"2025-11-25T14:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.865468 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.865564 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.865611 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.865636 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.865656 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:23Z","lastTransitionTime":"2025-11-25T14:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.969444 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.969505 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.969526 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.969611 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:23 crc kubenswrapper[4796]: I1125 14:25:23.969633 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:23Z","lastTransitionTime":"2025-11-25T14:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.072607 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.072668 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.072687 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.072713 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.072731 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:24Z","lastTransitionTime":"2025-11-25T14:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.177100 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.177165 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.177185 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.177212 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.177231 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:24Z","lastTransitionTime":"2025-11-25T14:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.280803 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.280877 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.280896 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.280923 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.280949 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:24Z","lastTransitionTime":"2025-11-25T14:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.385117 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.385458 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.385674 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.385815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.385991 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:24Z","lastTransitionTime":"2025-11-25T14:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.489788 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.489852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.489877 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.489908 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.489931 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:24Z","lastTransitionTime":"2025-11-25T14:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.593032 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.593134 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.593155 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.593766 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.593828 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:24Z","lastTransitionTime":"2025-11-25T14:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.697245 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.697300 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.697318 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.697340 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.697357 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:24Z","lastTransitionTime":"2025-11-25T14:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.800679 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.800718 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.800728 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.800745 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.800756 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:24Z","lastTransitionTime":"2025-11-25T14:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.903651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.903695 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.903707 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.903724 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:24 crc kubenswrapper[4796]: I1125 14:25:24.903736 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:24Z","lastTransitionTime":"2025-11-25T14:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.005928 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.005979 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.005990 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.006008 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.006021 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:25Z","lastTransitionTime":"2025-11-25T14:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.018125 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs\") pod \"network-metrics-daemon-n4f9r\" (UID: \"a07d588f-1940-4a4b-a4a9-94451e43ec8d\") " pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:25 crc kubenswrapper[4796]: E1125 14:25:25.018331 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:25:25 crc kubenswrapper[4796]: E1125 14:25:25.018404 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs podName:a07d588f-1940-4a4b-a4a9-94451e43ec8d nodeName:}" failed. No retries permitted until 2025-11-25 14:25:41.018386746 +0000 UTC m=+69.361496180 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs") pod "network-metrics-daemon-n4f9r" (UID: "a07d588f-1940-4a4b-a4a9-94451e43ec8d") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.110029 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.110107 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.110130 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.110159 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.110185 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:25Z","lastTransitionTime":"2025-11-25T14:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.212782 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.212845 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.212864 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.212888 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.212905 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:25Z","lastTransitionTime":"2025-11-25T14:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.220648 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:25:25 crc kubenswrapper[4796]: E1125 14:25:25.220863 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 14:25:57.220833448 +0000 UTC m=+85.563942912 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.221045 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:25 crc kubenswrapper[4796]: E1125 14:25:25.221218 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:25:25 crc kubenswrapper[4796]: E1125 14:25:25.221330 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:25:57.221300362 +0000 UTC m=+85.564409876 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.314900 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.314980 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.315005 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.315036 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.315057 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:25Z","lastTransitionTime":"2025-11-25T14:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.321645 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.321708 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.321818 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:25 crc kubenswrapper[4796]: E1125 14:25:25.321913 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:25:25 crc kubenswrapper[4796]: E1125 14:25:25.322036 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:25:57.322010206 +0000 UTC m=+85.665119660 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:25:25 crc kubenswrapper[4796]: E1125 14:25:25.322042 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:25:25 crc kubenswrapper[4796]: E1125 14:25:25.322087 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:25:25 crc kubenswrapper[4796]: E1125 14:25:25.322111 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:25:25 crc kubenswrapper[4796]: E1125 14:25:25.322043 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:25:25 crc kubenswrapper[4796]: E1125 14:25:25.322195 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 14:25:57.322164171 +0000 UTC m=+85.665273635 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:25:25 crc kubenswrapper[4796]: E1125 14:25:25.322196 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:25:25 crc kubenswrapper[4796]: E1125 14:25:25.322263 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:25:25 crc kubenswrapper[4796]: E1125 14:25:25.322324 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 14:25:57.322306005 +0000 UTC m=+85.665415469 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.409004 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.409106 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.409100 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.409004 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:25 crc kubenswrapper[4796]: E1125 14:25:25.409283 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:25 crc kubenswrapper[4796]: E1125 14:25:25.409518 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:25 crc kubenswrapper[4796]: E1125 14:25:25.409630 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:25:25 crc kubenswrapper[4796]: E1125 14:25:25.409752 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.416795 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.416824 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.416835 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.416854 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.416868 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:25Z","lastTransitionTime":"2025-11-25T14:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.519964 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.520040 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.520050 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.520070 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.520090 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:25Z","lastTransitionTime":"2025-11-25T14:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.624035 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.624128 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.624169 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.624201 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.624222 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:25Z","lastTransitionTime":"2025-11-25T14:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.727623 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.727701 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.727727 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.727761 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.727783 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:25Z","lastTransitionTime":"2025-11-25T14:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.831300 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.831365 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.831385 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.831414 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.831433 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:25Z","lastTransitionTime":"2025-11-25T14:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.933919 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.933982 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.933999 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.934024 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:25 crc kubenswrapper[4796]: I1125 14:25:25.934045 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:25Z","lastTransitionTime":"2025-11-25T14:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.036543 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.036589 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.036600 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.036616 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.036628 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:26Z","lastTransitionTime":"2025-11-25T14:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.139868 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.139928 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.139945 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.139967 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.139982 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:26Z","lastTransitionTime":"2025-11-25T14:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.243049 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.243111 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.243131 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.243161 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.243183 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:26Z","lastTransitionTime":"2025-11-25T14:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.346326 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.346394 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.346418 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.346453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.346478 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:26Z","lastTransitionTime":"2025-11-25T14:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.448534 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.448596 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.448609 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.448627 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.448638 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:26Z","lastTransitionTime":"2025-11-25T14:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.551476 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.551613 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.551642 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.551686 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.551715 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:26Z","lastTransitionTime":"2025-11-25T14:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.654915 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.654995 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.655011 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.655070 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.655088 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:26Z","lastTransitionTime":"2025-11-25T14:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.758744 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.758827 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.758855 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.758886 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.758909 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:26Z","lastTransitionTime":"2025-11-25T14:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.862645 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.862751 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.862810 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.862844 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.862898 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:26Z","lastTransitionTime":"2025-11-25T14:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.961343 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.965907 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.965957 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.965971 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.966051 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.966087 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:26Z","lastTransitionTime":"2025-11-25T14:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.977342 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:26Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:26 crc kubenswrapper[4796]: I1125 14:25:26.987488 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:25:26Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.003101 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.017642 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.034434 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d223a119-11a5-4802-9e8e-645fdb31ea88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://021d2f3be55ec89048bafcd7e73eccf6bd883ad03e92be98479670ef2abde11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada54c8a99d59f4978026df480de5fa1fc20b
957d8273eef164cca6ef8a79cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsz8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.050544 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.065769 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.068929 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.068968 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.068980 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.068995 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.069008 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:27Z","lastTransitionTime":"2025-11-25T14:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.091567 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:21Z\\\",\\\"message\\\":\\\"479 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1125 14:25:21.321197 6479 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 14:25:21.321230 6479 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:25:21.321244 6479 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 14:25:21.321289 6479 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:25:21.321302 6479 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:25:21.322883 6479 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1125 14:25:21.322905 6479 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 14:25:21.322934 6479 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:25:21.322947 6479 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 14:25:21.322999 6479 factory.go:656] Stopping watch factory\\\\nI1125 14:25:21.323026 6479 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:25:21.323059 6479 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:25:21.323078 6479 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 
14:25:21.323091 6479 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 14:25:21.323107 6479 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 14:25:21.323204 6479 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-22sz8_openshift-ovn-kubernetes(6eddc136-852e-4cf9-9f8a-e9ec94fc14d3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60
d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.111136 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753
cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.127654 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.145233 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.165500 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.171407 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.171439 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.171447 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.171464 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.171476 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:27Z","lastTransitionTime":"2025-11-25T14:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.183044 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.199654 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.216823 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d094
2e0624997f6214aa104a793f603be378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.231885 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n4f9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07d588f-1940-4a4b-a4a9-94451e43ec8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n4f9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:27 crc 
kubenswrapper[4796]: I1125 14:25:27.274308 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.274358 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.274369 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.274385 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.274397 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:27Z","lastTransitionTime":"2025-11-25T14:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.377892 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.377944 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.377961 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.377982 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.377999 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:27Z","lastTransitionTime":"2025-11-25T14:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.409121 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:27 crc kubenswrapper[4796]: E1125 14:25:27.409277 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.409364 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:27 crc kubenswrapper[4796]: E1125 14:25:27.409453 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.409524 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:27 crc kubenswrapper[4796]: E1125 14:25:27.409646 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.409723 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:27 crc kubenswrapper[4796]: E1125 14:25:27.409827 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.481453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.481527 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.481552 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.481623 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.481668 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:27Z","lastTransitionTime":"2025-11-25T14:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.584688 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.585028 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.585161 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.585288 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.586056 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:27Z","lastTransitionTime":"2025-11-25T14:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.689523 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.689616 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.689634 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.689659 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.689675 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:27Z","lastTransitionTime":"2025-11-25T14:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.793205 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.793269 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.793282 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.793295 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.793319 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:27Z","lastTransitionTime":"2025-11-25T14:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.896498 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.896570 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.896624 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.896653 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.896674 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:27Z","lastTransitionTime":"2025-11-25T14:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.999833 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.999883 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:27 crc kubenswrapper[4796]: I1125 14:25:27.999902 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:27.999924 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:27.999941 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:27Z","lastTransitionTime":"2025-11-25T14:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.038263 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.051392 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.062250 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nod
e-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.080347 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d223a119-11a5-4802-9e8e-645fdb31ea88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://021d2f3be55ec89048bafcd7e73eccf6bd883ad03e92be98479670ef2abde11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada54c8a99d59f4978026df480de5fa1fc20b
957d8273eef164cca6ef8a79cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsz8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.101321 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.102884 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.103055 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.103204 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.103357 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.103496 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:28Z","lastTransitionTime":"2025-11-25T14:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.119761 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.142999 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-
25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.163079 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.181364 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.207296 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.207360 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.207383 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.207414 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.207436 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:28Z","lastTransitionTime":"2025-11-25T14:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.214608 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:21Z\\\",\\\"message\\\":\\\"479 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1125 14:25:21.321197 6479 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 14:25:21.321230 6479 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:25:21.321244 6479 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 14:25:21.321289 6479 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:25:21.321302 6479 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:25:21.322883 6479 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1125 14:25:21.322905 6479 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 14:25:21.322934 6479 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:25:21.322947 6479 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 14:25:21.322999 6479 factory.go:656] Stopping watch factory\\\\nI1125 14:25:21.323026 6479 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:25:21.323059 6479 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:25:21.323078 6479 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 
14:25:21.323091 6479 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 14:25:21.323107 6479 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 14:25:21.323204 6479 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-22sz8_openshift-ovn-kubernetes(6eddc136-852e-4cf9-9f8a-e9ec94fc14d3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60
d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.236456 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.261296 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.285012 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.304526 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.310320 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:28 crc 
kubenswrapper[4796]: I1125 14:25:28.310376 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.310390 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.310407 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.310422 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:28Z","lastTransitionTime":"2025-11-25T14:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.321729 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603b
e378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.344150 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753
cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.367682 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.383453 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n4f9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07d588f-1940-4a4b-a4a9-94451e43ec8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n4f9r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.412402 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.412477 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.412490 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.412524 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.412538 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:28Z","lastTransitionTime":"2025-11-25T14:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.515612 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.515658 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.515668 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.515687 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.515698 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:28Z","lastTransitionTime":"2025-11-25T14:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.618701 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.618754 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.618764 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.618783 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.618794 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:28Z","lastTransitionTime":"2025-11-25T14:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.721820 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.721868 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.721880 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.721896 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.721908 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:28Z","lastTransitionTime":"2025-11-25T14:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.825088 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.825150 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.825173 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.825203 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.825226 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:28Z","lastTransitionTime":"2025-11-25T14:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.928337 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.928403 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.928421 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.928451 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:28 crc kubenswrapper[4796]: I1125 14:25:28.928471 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:28Z","lastTransitionTime":"2025-11-25T14:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.031629 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.031983 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.032114 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.032265 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.032413 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:29Z","lastTransitionTime":"2025-11-25T14:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.135087 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.135732 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.135828 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.135924 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.136003 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:29Z","lastTransitionTime":"2025-11-25T14:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.238984 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.239042 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.239060 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.239084 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.239102 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:29Z","lastTransitionTime":"2025-11-25T14:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.341926 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.341969 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.341980 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.341998 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.342010 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:29Z","lastTransitionTime":"2025-11-25T14:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.408398 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.408647 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:29 crc kubenswrapper[4796]: E1125 14:25:29.408749 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.408795 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.408865 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:29 crc kubenswrapper[4796]: E1125 14:25:29.409076 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:29 crc kubenswrapper[4796]: E1125 14:25:29.409159 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:29 crc kubenswrapper[4796]: E1125 14:25:29.409356 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.444814 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.444873 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.444894 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.444919 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.444954 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:29Z","lastTransitionTime":"2025-11-25T14:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.547513 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.547567 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.547597 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.547618 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.547631 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:29Z","lastTransitionTime":"2025-11-25T14:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.651012 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.651086 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.651103 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.651126 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.651145 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:29Z","lastTransitionTime":"2025-11-25T14:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.754455 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.754502 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.754512 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.754530 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.754542 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:29Z","lastTransitionTime":"2025-11-25T14:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.858037 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.858099 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.858116 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.858140 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.858158 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:29Z","lastTransitionTime":"2025-11-25T14:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.960783 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.960826 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.960837 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.960854 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:29 crc kubenswrapper[4796]: I1125 14:25:29.960871 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:29Z","lastTransitionTime":"2025-11-25T14:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.063762 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.064177 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.064379 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.064622 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.064858 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:30Z","lastTransitionTime":"2025-11-25T14:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.167377 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.167656 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.167751 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.167832 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.167906 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:30Z","lastTransitionTime":"2025-11-25T14:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.271307 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.271683 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.271796 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.271904 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.271992 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:30Z","lastTransitionTime":"2025-11-25T14:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.374565 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.374857 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.374935 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.375024 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.375109 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:30Z","lastTransitionTime":"2025-11-25T14:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.478233 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.478868 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.478963 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.479050 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.479140 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:30Z","lastTransitionTime":"2025-11-25T14:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.501083 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.501166 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.501185 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.501212 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.501228 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:30Z","lastTransitionTime":"2025-11-25T14:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:30 crc kubenswrapper[4796]: E1125 14:25:30.517614 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.520733 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.520772 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.520780 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.520795 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.520803 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:30Z","lastTransitionTime":"2025-11-25T14:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:30 crc kubenswrapper[4796]: E1125 14:25:30.532996 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.623442 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.623483 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.623494 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.623512 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.623523 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:30Z","lastTransitionTime":"2025-11-25T14:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:30 crc kubenswrapper[4796]: E1125 14:25:30.635137 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.638427 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.638464 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.638475 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.638493 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.638505 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:30Z","lastTransitionTime":"2025-11-25T14:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:30 crc kubenswrapper[4796]: E1125 14:25:30.649146 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.652249 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.652284 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.652298 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.652316 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.652329 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:30Z","lastTransitionTime":"2025-11-25T14:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:30 crc kubenswrapper[4796]: E1125 14:25:30.663736 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:30 crc kubenswrapper[4796]: E1125 14:25:30.663855 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.665644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.665679 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.665690 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.665707 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.665718 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:30Z","lastTransitionTime":"2025-11-25T14:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.767659 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.767694 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.767703 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.767735 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.767746 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:30Z","lastTransitionTime":"2025-11-25T14:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.869905 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.869963 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.869980 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.870002 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.870020 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:30Z","lastTransitionTime":"2025-11-25T14:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.973067 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.973122 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.973134 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.973154 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:30 crc kubenswrapper[4796]: I1125 14:25:30.973166 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:30Z","lastTransitionTime":"2025-11-25T14:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.076532 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.076608 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.076622 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.076642 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.076655 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:31Z","lastTransitionTime":"2025-11-25T14:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.179714 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.179761 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.179773 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.179789 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.179800 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:31Z","lastTransitionTime":"2025-11-25T14:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.283079 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.283118 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.283158 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.283389 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.283417 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:31Z","lastTransitionTime":"2025-11-25T14:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.386101 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.386148 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.386160 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.386177 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.386188 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:31Z","lastTransitionTime":"2025-11-25T14:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.408817 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.408860 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.408890 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.408915 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:31 crc kubenswrapper[4796]: E1125 14:25:31.409046 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:31 crc kubenswrapper[4796]: E1125 14:25:31.409135 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:31 crc kubenswrapper[4796]: E1125 14:25:31.409223 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:25:31 crc kubenswrapper[4796]: E1125 14:25:31.409287 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.488887 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.488973 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.488995 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.489026 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.489045 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:31Z","lastTransitionTime":"2025-11-25T14:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.592761 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.592843 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.592869 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.592899 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.592918 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:31Z","lastTransitionTime":"2025-11-25T14:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.696851 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.696905 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.696918 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.696938 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.696954 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:31Z","lastTransitionTime":"2025-11-25T14:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.799989 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.800050 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.800068 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.800092 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.800110 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:31Z","lastTransitionTime":"2025-11-25T14:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.903816 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.903880 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.903902 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.903938 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:31 crc kubenswrapper[4796]: I1125 14:25:31.903962 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:31Z","lastTransitionTime":"2025-11-25T14:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.006352 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.006409 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.006419 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.006438 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.006449 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:32Z","lastTransitionTime":"2025-11-25T14:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.109767 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.109835 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.109853 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.109878 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.109895 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:32Z","lastTransitionTime":"2025-11-25T14:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.212946 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.213016 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.213033 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.213062 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.213079 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:32Z","lastTransitionTime":"2025-11-25T14:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.315343 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.315403 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.315614 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.315638 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.315918 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:32Z","lastTransitionTime":"2025-11-25T14:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.419396 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.419449 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.419469 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.419509 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.419529 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:32Z","lastTransitionTime":"2025-11-25T14:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.436228 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.455069 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"447a616b-b891-4833-98e5-c5408231aece\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e898ed92981e7f3340ae53aa34928f0aa00dfb9be5464c60955ff9107bdbae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d868f7a390763d85a4b449d223d6ee9fbc6f99da73b9887cf01c3e364412809b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0da41b1ad7c29895c550716e8325fa0ba9c6bf6d1fd16482df8751102b6cbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a479f5ec52a12bd47f277ecfceac00b5e57038ec2c49c77988496aa510c3b73a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a479f5ec52a12bd47f277ecfceac00b5e57038ec2c49c77988496aa510c3b73a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.469641 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea09785
66887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.493910 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:21Z\\\",\\\"message\\\":\\\"479 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1125 14:25:21.321197 6479 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 14:25:21.321230 6479 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:25:21.321244 6479 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 14:25:21.321289 6479 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:25:21.321302 6479 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:25:21.322883 6479 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1125 14:25:21.322905 6479 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 14:25:21.322934 6479 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:25:21.322947 6479 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 14:25:21.322999 6479 factory.go:656] Stopping watch factory\\\\nI1125 14:25:21.323026 6479 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:25:21.323059 6479 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:25:21.323078 6479 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 
14:25:21.323091 6479 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 14:25:21.323107 6479 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 14:25:21.323204 6479 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-22sz8_openshift-ovn-kubernetes(6eddc136-852e-4cf9-9f8a-e9ec94fc14d3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60
d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.512965 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.521606 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:32 crc 
kubenswrapper[4796]: I1125 14:25:32.521651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.521663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.521683 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.521697 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:32Z","lastTransitionTime":"2025-11-25T14:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.529713 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603b
e378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.549401 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753
cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.566250 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.584108 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.598057 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.614711 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.627715 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.627754 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.627766 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.627782 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.627794 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:32Z","lastTransitionTime":"2025-11-25T14:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.628330 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n4f9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07d588f-1940-4a4b-a4a9-94451e43ec8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n4f9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:32 crc 
kubenswrapper[4796]: I1125 14:25:32.643862 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.659318 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:25:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.680623 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.696277 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.713838 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d223a119-11a5-4802-9e8e-645fdb31ea88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://021d2f3be55ec89048bafcd7e73eccf6bd883ad03e92be98479670ef2abde11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada54c8a99d59f4978026df480de5fa1fc20b
957d8273eef164cca6ef8a79cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsz8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.729925 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.729995 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.730007 4796 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.730023 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.730060 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:32Z","lastTransitionTime":"2025-11-25T14:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.832898 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.832963 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.832981 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.833001 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.833013 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:32Z","lastTransitionTime":"2025-11-25T14:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.936704 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.936850 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.936878 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.936918 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:32 crc kubenswrapper[4796]: I1125 14:25:32.936941 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:32Z","lastTransitionTime":"2025-11-25T14:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.039957 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.040021 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.040043 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.040075 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.040099 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:33Z","lastTransitionTime":"2025-11-25T14:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.142659 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.142719 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.142730 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.142749 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.142762 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:33Z","lastTransitionTime":"2025-11-25T14:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.248251 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.248365 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.248397 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.248439 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.248481 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:33Z","lastTransitionTime":"2025-11-25T14:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.352266 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.352329 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.352342 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.352363 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.352378 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:33Z","lastTransitionTime":"2025-11-25T14:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.409168 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.409288 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.409202 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:33 crc kubenswrapper[4796]: E1125 14:25:33.409412 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:33 crc kubenswrapper[4796]: E1125 14:25:33.409658 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:33 crc kubenswrapper[4796]: E1125 14:25:33.409806 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.409854 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:33 crc kubenswrapper[4796]: E1125 14:25:33.409997 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.455688 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.455757 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.455791 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.455815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.455833 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:33Z","lastTransitionTime":"2025-11-25T14:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.558259 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.558313 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.558329 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.558351 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.558370 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:33Z","lastTransitionTime":"2025-11-25T14:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.661776 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.661841 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.661853 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.661873 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.661887 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:33Z","lastTransitionTime":"2025-11-25T14:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.764684 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.764730 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.764741 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.764759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.764771 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:33Z","lastTransitionTime":"2025-11-25T14:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.867051 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.867135 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.867158 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.867509 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.867530 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:33Z","lastTransitionTime":"2025-11-25T14:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.970358 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.970411 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.970424 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.970439 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:33 crc kubenswrapper[4796]: I1125 14:25:33.970452 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:33Z","lastTransitionTime":"2025-11-25T14:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.073480 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.073630 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.073660 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.073692 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.073714 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:34Z","lastTransitionTime":"2025-11-25T14:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.177257 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.177321 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.177343 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.177371 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.177391 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:34Z","lastTransitionTime":"2025-11-25T14:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.280318 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.280365 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.280381 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.280463 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.280484 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:34Z","lastTransitionTime":"2025-11-25T14:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.383482 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.383547 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.383561 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.383600 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.383614 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:34Z","lastTransitionTime":"2025-11-25T14:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.486087 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.486139 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.486152 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.486169 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.486182 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:34Z","lastTransitionTime":"2025-11-25T14:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.589475 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.589533 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.589549 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.589597 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.589616 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:34Z","lastTransitionTime":"2025-11-25T14:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.691860 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.691920 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.691936 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.691965 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.691982 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:34Z","lastTransitionTime":"2025-11-25T14:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.795315 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.795377 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.795396 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.795421 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.795438 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:34Z","lastTransitionTime":"2025-11-25T14:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.898096 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.898229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.898311 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.898400 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:34 crc kubenswrapper[4796]: I1125 14:25:34.898426 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:34Z","lastTransitionTime":"2025-11-25T14:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.002112 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.002174 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.002195 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.002223 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.002243 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:35Z","lastTransitionTime":"2025-11-25T14:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.105785 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.105880 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.105905 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.105943 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.105967 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:35Z","lastTransitionTime":"2025-11-25T14:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.210373 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.210518 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.210546 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.210642 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.210675 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:35Z","lastTransitionTime":"2025-11-25T14:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.313942 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.314007 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.314020 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.314037 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.314049 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:35Z","lastTransitionTime":"2025-11-25T14:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.408860 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.409018 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:35 crc kubenswrapper[4796]: E1125 14:25:35.409065 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.409131 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.409214 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:35 crc kubenswrapper[4796]: E1125 14:25:35.409226 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:25:35 crc kubenswrapper[4796]: E1125 14:25:35.409344 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:35 crc kubenswrapper[4796]: E1125 14:25:35.410702 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.421288 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.421345 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.421362 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.421386 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.421406 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:35Z","lastTransitionTime":"2025-11-25T14:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.525854 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.526131 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.526223 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.526295 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.526374 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:35Z","lastTransitionTime":"2025-11-25T14:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.628777 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.629534 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.629716 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.629844 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.629955 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:35Z","lastTransitionTime":"2025-11-25T14:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.733172 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.733276 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.733300 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.733337 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.733362 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:35Z","lastTransitionTime":"2025-11-25T14:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.836288 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.836364 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.836385 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.836416 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.836438 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:35Z","lastTransitionTime":"2025-11-25T14:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.939610 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.939675 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.939689 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.939737 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:35 crc kubenswrapper[4796]: I1125 14:25:35.939751 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:35Z","lastTransitionTime":"2025-11-25T14:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.043151 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.043254 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.043282 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.043313 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.043338 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:36Z","lastTransitionTime":"2025-11-25T14:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.146266 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.146319 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.146331 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.146349 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.146365 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:36Z","lastTransitionTime":"2025-11-25T14:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.249476 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.249531 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.249548 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.249596 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.249618 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:36Z","lastTransitionTime":"2025-11-25T14:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.352260 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.352294 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.352301 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.352314 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.352325 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:36Z","lastTransitionTime":"2025-11-25T14:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.409558 4796 scope.go:117] "RemoveContainer" containerID="bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1" Nov 25 14:25:36 crc kubenswrapper[4796]: E1125 14:25:36.409712 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-22sz8_openshift-ovn-kubernetes(6eddc136-852e-4cf9-9f8a-e9ec94fc14d3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.454433 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.454487 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.454506 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.454528 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.454545 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:36Z","lastTransitionTime":"2025-11-25T14:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.557939 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.557988 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.558001 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.558018 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.558031 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:36Z","lastTransitionTime":"2025-11-25T14:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.660476 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.660549 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.660669 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.660704 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.660726 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:36Z","lastTransitionTime":"2025-11-25T14:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.763291 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.763386 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.763407 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.763431 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.763448 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:36Z","lastTransitionTime":"2025-11-25T14:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.865488 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.865525 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.865533 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.865563 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.865589 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:36Z","lastTransitionTime":"2025-11-25T14:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.968257 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.968296 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.968308 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.968326 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:36 crc kubenswrapper[4796]: I1125 14:25:36.968338 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:36Z","lastTransitionTime":"2025-11-25T14:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.071207 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.071249 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.071261 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.071279 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.071291 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:37Z","lastTransitionTime":"2025-11-25T14:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.175305 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.175370 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.175388 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.175414 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.175436 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:37Z","lastTransitionTime":"2025-11-25T14:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.278517 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.278559 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.278626 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.278643 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.278652 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:37Z","lastTransitionTime":"2025-11-25T14:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.381907 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.381963 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.381976 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.382002 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.382016 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:37Z","lastTransitionTime":"2025-11-25T14:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.408430 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.408508 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.408521 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:37 crc kubenswrapper[4796]: E1125 14:25:37.408719 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.408752 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:37 crc kubenswrapper[4796]: E1125 14:25:37.409016 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:37 crc kubenswrapper[4796]: E1125 14:25:37.409098 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:25:37 crc kubenswrapper[4796]: E1125 14:25:37.409151 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.484340 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.484379 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.484387 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.484400 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.484409 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:37Z","lastTransitionTime":"2025-11-25T14:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.586808 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.586860 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.586875 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.586892 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.586904 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:37Z","lastTransitionTime":"2025-11-25T14:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.690300 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.690340 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.690349 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.690363 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.690377 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:37Z","lastTransitionTime":"2025-11-25T14:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.792795 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.792826 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.792839 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.792854 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.792862 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:37Z","lastTransitionTime":"2025-11-25T14:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.895035 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.895072 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.895081 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.895095 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.895105 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:37Z","lastTransitionTime":"2025-11-25T14:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.997088 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.997123 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.997132 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.997149 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:37 crc kubenswrapper[4796]: I1125 14:25:37.997158 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:37Z","lastTransitionTime":"2025-11-25T14:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.101769 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.101812 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.101828 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.101852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.101869 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:38Z","lastTransitionTime":"2025-11-25T14:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.204276 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.204325 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.204342 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.204364 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.204398 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:38Z","lastTransitionTime":"2025-11-25T14:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.306938 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.306979 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.306995 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.307014 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.307029 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:38Z","lastTransitionTime":"2025-11-25T14:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.410801 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.410846 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.410868 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.410895 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.410917 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:38Z","lastTransitionTime":"2025-11-25T14:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.512608 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.512646 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.512656 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.512671 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.512680 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:38Z","lastTransitionTime":"2025-11-25T14:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.615754 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.615818 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.615843 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.615868 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.615890 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:38Z","lastTransitionTime":"2025-11-25T14:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.719314 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.719674 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.719691 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.719716 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.719732 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:38Z","lastTransitionTime":"2025-11-25T14:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.821920 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.821958 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.821970 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.821985 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.821995 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:38Z","lastTransitionTime":"2025-11-25T14:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.924425 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.924470 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.924482 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.924499 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:38 crc kubenswrapper[4796]: I1125 14:25:38.924511 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:38Z","lastTransitionTime":"2025-11-25T14:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.027365 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.027412 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.027427 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.027447 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.027461 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:39Z","lastTransitionTime":"2025-11-25T14:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.130539 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.130598 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.130610 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.130627 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.130640 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:39Z","lastTransitionTime":"2025-11-25T14:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.233278 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.233305 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.233314 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.233326 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.233334 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:39Z","lastTransitionTime":"2025-11-25T14:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.335334 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.335435 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.335461 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.335494 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.335513 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:39Z","lastTransitionTime":"2025-11-25T14:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.408926 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.409119 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:39 crc kubenswrapper[4796]: E1125 14:25:39.409116 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.409157 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.409183 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:39 crc kubenswrapper[4796]: E1125 14:25:39.409299 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:39 crc kubenswrapper[4796]: E1125 14:25:39.409648 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:39 crc kubenswrapper[4796]: E1125 14:25:39.410335 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.437755 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.437788 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.437796 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.437809 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.437817 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:39Z","lastTransitionTime":"2025-11-25T14:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.546806 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.546847 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.546859 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.546875 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.546886 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:39Z","lastTransitionTime":"2025-11-25T14:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.650146 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.650472 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.650725 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.650939 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.651315 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:39Z","lastTransitionTime":"2025-11-25T14:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.754342 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.754402 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.754425 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.754455 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.754477 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:39Z","lastTransitionTime":"2025-11-25T14:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.856942 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.857318 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.857493 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.857695 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.857848 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:39Z","lastTransitionTime":"2025-11-25T14:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.960691 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.960983 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.961091 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.961155 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:39 crc kubenswrapper[4796]: I1125 14:25:39.961225 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:39Z","lastTransitionTime":"2025-11-25T14:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.063916 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.063964 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.063982 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.064007 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.064025 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:40Z","lastTransitionTime":"2025-11-25T14:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.167215 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.167464 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.167561 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.167659 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.167734 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:40Z","lastTransitionTime":"2025-11-25T14:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.270140 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.270203 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.270218 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.270241 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.270257 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:40Z","lastTransitionTime":"2025-11-25T14:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.372368 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.372560 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.372694 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.372783 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.372962 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:40Z","lastTransitionTime":"2025-11-25T14:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.475211 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.475400 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.475470 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.475536 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.475617 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:40Z","lastTransitionTime":"2025-11-25T14:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.578549 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.578631 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.578642 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.578665 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.578678 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:40Z","lastTransitionTime":"2025-11-25T14:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.680304 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.680343 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.680353 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.680369 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.680379 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:40Z","lastTransitionTime":"2025-11-25T14:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.783076 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.783123 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.783137 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.783153 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.783167 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:40Z","lastTransitionTime":"2025-11-25T14:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.886396 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.886431 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.886443 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.886460 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.886472 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:40Z","lastTransitionTime":"2025-11-25T14:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.933115 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.933165 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.933176 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.933190 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.933198 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:40Z","lastTransitionTime":"2025-11-25T14:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:40 crc kubenswrapper[4796]: E1125 14:25:40.945888 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:40Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.949618 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.949653 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.949663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.949678 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.949688 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:40Z","lastTransitionTime":"2025-11-25T14:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:40 crc kubenswrapper[4796]: E1125 14:25:40.962387 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:40Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.966501 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.966550 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.966562 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.966600 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.966612 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:40Z","lastTransitionTime":"2025-11-25T14:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:40 crc kubenswrapper[4796]: E1125 14:25:40.982763 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:40Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.987034 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.987063 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.987076 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.987090 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:40 crc kubenswrapper[4796]: I1125 14:25:40.987098 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:40Z","lastTransitionTime":"2025-11-25T14:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:41 crc kubenswrapper[4796]: E1125 14:25:41.000064 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:40Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.003866 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.003901 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.003912 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.003923 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.003931 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:41Z","lastTransitionTime":"2025-11-25T14:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:41 crc kubenswrapper[4796]: E1125 14:25:41.019917 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:41 crc kubenswrapper[4796]: E1125 14:25:41.020022 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.021397 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.021430 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.021443 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.021456 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.021470 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:41Z","lastTransitionTime":"2025-11-25T14:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.038135 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs\") pod \"network-metrics-daemon-n4f9r\" (UID: \"a07d588f-1940-4a4b-a4a9-94451e43ec8d\") " pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:41 crc kubenswrapper[4796]: E1125 14:25:41.038318 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:25:41 crc kubenswrapper[4796]: E1125 14:25:41.038421 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs podName:a07d588f-1940-4a4b-a4a9-94451e43ec8d nodeName:}" failed. No retries permitted until 2025-11-25 14:26:13.038394395 +0000 UTC m=+101.381503859 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs") pod "network-metrics-daemon-n4f9r" (UID: "a07d588f-1940-4a4b-a4a9-94451e43ec8d") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.124834 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.124888 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.124907 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.124933 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.124954 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:41Z","lastTransitionTime":"2025-11-25T14:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.228686 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.228751 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.228776 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.228809 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.228833 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:41Z","lastTransitionTime":"2025-11-25T14:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.331524 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.331588 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.331599 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.331617 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.331629 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:41Z","lastTransitionTime":"2025-11-25T14:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.408670 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.408749 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.408753 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.408703 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:41 crc kubenswrapper[4796]: E1125 14:25:41.408846 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:25:41 crc kubenswrapper[4796]: E1125 14:25:41.409030 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:41 crc kubenswrapper[4796]: E1125 14:25:41.409160 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:41 crc kubenswrapper[4796]: E1125 14:25:41.409257 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.434225 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.434291 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.434313 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.434345 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.434368 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:41Z","lastTransitionTime":"2025-11-25T14:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.536355 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.536392 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.536408 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.536429 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.536490 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:41Z","lastTransitionTime":"2025-11-25T14:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.638570 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.638661 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.638681 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.638705 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.638725 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:41Z","lastTransitionTime":"2025-11-25T14:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.741282 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.741339 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.741352 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.741371 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.741383 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:41Z","lastTransitionTime":"2025-11-25T14:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.843663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.843732 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.843753 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.843779 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.843797 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:41Z","lastTransitionTime":"2025-11-25T14:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.946010 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.946054 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.946065 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.946080 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:41 crc kubenswrapper[4796]: I1125 14:25:41.946093 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:41Z","lastTransitionTime":"2025-11-25T14:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.048690 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.048734 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.048745 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.048760 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.048771 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:42Z","lastTransitionTime":"2025-11-25T14:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.151120 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.151158 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.151175 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.151197 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.151214 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:42Z","lastTransitionTime":"2025-11-25T14:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.252986 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.253041 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.253053 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.253071 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.253083 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:42Z","lastTransitionTime":"2025-11-25T14:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.355869 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.355919 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.355937 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.355963 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.355980 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:42Z","lastTransitionTime":"2025-11-25T14:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.422667 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.434418 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.444166 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.455179 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.458296 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:42 crc 
kubenswrapper[4796]: I1125 14:25:42.458350 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.458367 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.458391 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.458414 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:42Z","lastTransitionTime":"2025-11-25T14:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.466046 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603b
e378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.479922 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753
cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.492002 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.504905 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n4f9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07d588f-1940-4a4b-a4a9-94451e43ec8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n4f9r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.515713 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24
:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.531762 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d223a119-11a5-4802-9e8e-645fdb31ea88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://021d2f3be55ec89048bafcd7e73eccf6bd883ad03e92be98479670ef2abde11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada54c8a99d59f4978026df480de5fa1fc20b
957d8273eef164cca6ef8a79cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsz8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.546555 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.560262 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.560304 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.560315 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.560329 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.560340 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:42Z","lastTransitionTime":"2025-11-25T14:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.565744 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.579672 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-
25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.596419 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.614858 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"447a616b-b891-4833-98e5-c5408231aece\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e898ed92981e7f3340ae53aa34928f0aa00dfb9be5464c60955ff9107bdbae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d868f7a390763d85a4b449d223d6ee9fbc6f99da73b9887cf01c3e364412809b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0da41b1ad7c29895c550716e8325fa0ba9c6bf6d1fd16482df8751102b6cbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a479f5ec52a12bd47f277ecfceac00b5e57038ec2c49c77988496aa510c3b73a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a479f5ec52a12bd47f277ecfceac00b5e57038ec2c49c77988496aa510c3b73a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.626195 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea09785
66887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.648530 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:21Z\\\",\\\"message\\\":\\\"479 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1125 14:25:21.321197 6479 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 14:25:21.321230 6479 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:25:21.321244 6479 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 14:25:21.321289 6479 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:25:21.321302 6479 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:25:21.322883 6479 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1125 14:25:21.322905 6479 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 14:25:21.322934 6479 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:25:21.322947 6479 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 14:25:21.322999 6479 factory.go:656] Stopping watch factory\\\\nI1125 14:25:21.323026 6479 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:25:21.323059 6479 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:25:21.323078 6479 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 
14:25:21.323091 6479 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 14:25:21.323107 6479 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 14:25:21.323204 6479 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-22sz8_openshift-ovn-kubernetes(6eddc136-852e-4cf9-9f8a-e9ec94fc14d3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60
d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.663511 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.663597 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.663617 4796 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.663638 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.663656 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:42Z","lastTransitionTime":"2025-11-25T14:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.766107 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.766310 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.766378 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.766449 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.766544 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:42Z","lastTransitionTime":"2025-11-25T14:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.869030 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.869075 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.869084 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.869099 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.869108 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:42Z","lastTransitionTime":"2025-11-25T14:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.971370 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.971415 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.971427 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.971445 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:42 crc kubenswrapper[4796]: I1125 14:25:42.971458 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:42Z","lastTransitionTime":"2025-11-25T14:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.073748 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.073792 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.073803 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.073820 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.073833 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:43Z","lastTransitionTime":"2025-11-25T14:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.176117 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.176160 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.176173 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.176187 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.176198 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:43Z","lastTransitionTime":"2025-11-25T14:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.278537 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.278601 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.278611 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.278624 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.278633 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:43Z","lastTransitionTime":"2025-11-25T14:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.381293 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.381330 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.381380 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.381403 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.381421 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:43Z","lastTransitionTime":"2025-11-25T14:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.408852 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.408856 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:43 crc kubenswrapper[4796]: E1125 14:25:43.409031 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.408863 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.408862 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:43 crc kubenswrapper[4796]: E1125 14:25:43.409144 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:25:43 crc kubenswrapper[4796]: E1125 14:25:43.409221 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:43 crc kubenswrapper[4796]: E1125 14:25:43.409343 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.483276 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.483308 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.483317 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.483331 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.483340 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:43Z","lastTransitionTime":"2025-11-25T14:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.585838 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.585871 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.585883 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.585897 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.585908 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:43Z","lastTransitionTime":"2025-11-25T14:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.687953 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.688030 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.688054 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.688086 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.688110 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:43Z","lastTransitionTime":"2025-11-25T14:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.791094 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.791141 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.791153 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.791179 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.791193 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:43Z","lastTransitionTime":"2025-11-25T14:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.894987 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.895023 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.895031 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.895046 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.895059 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:43Z","lastTransitionTime":"2025-11-25T14:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.997932 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.998019 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.998036 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.998053 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:43 crc kubenswrapper[4796]: I1125 14:25:43.998066 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:43Z","lastTransitionTime":"2025-11-25T14:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.101483 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.101549 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.101593 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.101617 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.101633 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:44Z","lastTransitionTime":"2025-11-25T14:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.204571 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.204629 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.204648 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.204674 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.204691 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:44Z","lastTransitionTime":"2025-11-25T14:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.307080 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.307124 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.307138 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.307155 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.307167 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:44Z","lastTransitionTime":"2025-11-25T14:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.416764 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.416817 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.416830 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.416848 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.416867 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:44Z","lastTransitionTime":"2025-11-25T14:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.519790 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.519852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.519871 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.519894 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.519911 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:44Z","lastTransitionTime":"2025-11-25T14:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.622714 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.622768 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.622779 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.622794 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.622805 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:44Z","lastTransitionTime":"2025-11-25T14:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.725181 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.725232 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.725244 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.725260 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.725271 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:44Z","lastTransitionTime":"2025-11-25T14:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.828522 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.828625 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.828646 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.829106 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.829164 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:44Z","lastTransitionTime":"2025-11-25T14:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.914693 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ch8mf_7e00ee09-b0b0-4ae8-a51d-cc11fb99679b/kube-multus/0.log" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.914774 4796 generic.go:334] "Generic (PLEG): container finished" podID="7e00ee09-b0b0-4ae8-a51d-cc11fb99679b" containerID="66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1" exitCode=1 Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.914823 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ch8mf" event={"ID":"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b","Type":"ContainerDied","Data":"66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1"} Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.915423 4796 scope.go:117] "RemoveContainer" containerID="66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.931807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.931927 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.932021 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.932112 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.932154 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:44Z","lastTransitionTime":"2025-11-25T14:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.934930 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:44Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.953674 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb
9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\
\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a
6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:44Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.971728 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:44Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:44 crc kubenswrapper[4796]: I1125 14:25:44.992483 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d223a119-11a5-4802-9e8e-645fdb31ea88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://021d2f3be55ec89048bafcd7e73eccf6bd883ad03e92be98479670ef2abde11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada54c8a99d59f4978026df480de5fa1fc20b957d8273eef164cca6ef8a79cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsz8g\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:44Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.013829 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:45Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.030632 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:45Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.035925 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.035968 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.035983 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 
14:25:45.036004 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.036019 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:45Z","lastTransitionTime":"2025-11-25T14:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.042191 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:45Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.066521 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:21Z\\\",\\\"message\\\":\\\"479 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1125 14:25:21.321197 6479 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 14:25:21.321230 6479 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:25:21.321244 6479 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 14:25:21.321289 6479 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:25:21.321302 6479 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:25:21.322883 6479 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1125 14:25:21.322905 6479 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 14:25:21.322934 6479 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:25:21.322947 6479 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 14:25:21.322999 6479 factory.go:656] Stopping watch factory\\\\nI1125 14:25:21.323026 6479 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:25:21.323059 6479 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:25:21.323078 6479 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 
14:25:21.323091 6479 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 14:25:21.323107 6479 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 14:25:21.323204 6479 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-22sz8_openshift-ovn-kubernetes(6eddc136-852e-4cf9-9f8a-e9ec94fc14d3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60
d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:45Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.081470 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"447a616b-b891-4833-98e5-c5408231aece\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e898ed92981e7f3340ae53aa34928f0aa00dfb9be5464c60955ff9107bdbae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d868f7a390763d85a4b449d223d6ee9fbc6f99da73b9887cf01c3e364412809b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0da41b1ad7c29895c550716e8325fa0ba9c6bf6d1fd16482df8751102b6cbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a479f5ec52a12bd47f277ecfceac00b5e57038ec2c49c77988496aa510c3b73a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a479f5ec52a12bd47f277ecfceac00b5e57038ec2c49c77988496aa510c3b73a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:45Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.093757 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753
cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:45Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.105058 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:45Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.120961 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:45Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.135924 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:45Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.137970 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.138005 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.138016 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.138032 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.138043 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:45Z","lastTransitionTime":"2025-11-25T14:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.150846 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:45Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.166596 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:44Z\\\",\\\"message\\\":\\\"2025-11-25T14:24:58+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_79685361-642c-4342-aae5-88fbfb152c76\\\\n2025-11-25T14:24:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_79685361-642c-4342-aae5-88fbfb152c76 to /host/opt/cni/bin/\\\\n2025-11-25T14:24:58Z [verbose] multus-daemon started\\\\n2025-11-25T14:24:58Z [verbose] Readiness Indicator file check\\\\n2025-11-25T14:25:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:45Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.179742 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603b
e378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:45Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.191201 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n4f9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07d588f-1940-4a4b-a4a9-94451e43ec8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n4f9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:45Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:45 crc 
kubenswrapper[4796]: I1125 14:25:45.240965 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.240998 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.241009 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.241029 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.241045 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:45Z","lastTransitionTime":"2025-11-25T14:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.343805 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.343841 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.343853 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.343869 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.343881 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:45Z","lastTransitionTime":"2025-11-25T14:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.408753 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.408853 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.408893 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.409042 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:45 crc kubenswrapper[4796]: E1125 14:25:45.409218 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:45 crc kubenswrapper[4796]: E1125 14:25:45.409315 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:45 crc kubenswrapper[4796]: E1125 14:25:45.409369 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:25:45 crc kubenswrapper[4796]: E1125 14:25:45.409914 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.446258 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.446301 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.446313 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.446329 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.446341 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:45Z","lastTransitionTime":"2025-11-25T14:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.548205 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.548299 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.548318 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.548397 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.548422 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:45Z","lastTransitionTime":"2025-11-25T14:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.650779 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.650852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.650875 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.650904 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.650921 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:45Z","lastTransitionTime":"2025-11-25T14:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.753641 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.753697 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.753710 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.753728 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.753739 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:45Z","lastTransitionTime":"2025-11-25T14:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.856529 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.856618 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.856682 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.856717 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.856736 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:45Z","lastTransitionTime":"2025-11-25T14:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.921619 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ch8mf_7e00ee09-b0b0-4ae8-a51d-cc11fb99679b/kube-multus/0.log" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.921738 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ch8mf" event={"ID":"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b","Type":"ContainerStarted","Data":"3d0e5d28fbb41835a1f2790a85f8d340b3487500a92eb12385db1ff4ce4c85c9"} Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.941642 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"447a616b-b891-4833-98e5-c5408231aece\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e898ed92981e7f3340ae53aa34928f0aa00dfb9be5464c60955ff9107bdbae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243
b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d868f7a390763d85a4b449d223d6ee9fbc6f99da73b9887cf01c3e364412809b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0da41b1ad7c29895c550716e8325fa0ba9c6bf6d1fd16482df8751102b6cbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a479f5ec52a12bd47f277ecfceac00b5e57038ec2c49c77988496aa510c3b73a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a479f5ec52a12bd47f277ecfceac00b5e57038ec2c49c77988496aa510c3b73a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:45Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.957926 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:45Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.965943 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.965977 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.965986 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.965999 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.966009 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:45Z","lastTransitionTime":"2025-11-25T14:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:45 crc kubenswrapper[4796]: I1125 14:25:45.983128 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:21Z\\\",\\\"message\\\":\\\"479 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1125 14:25:21.321197 6479 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 14:25:21.321230 6479 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:25:21.321244 6479 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 14:25:21.321289 6479 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:25:21.321302 6479 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:25:21.322883 6479 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1125 14:25:21.322905 6479 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 14:25:21.322934 6479 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:25:21.322947 6479 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 14:25:21.322999 6479 factory.go:656] Stopping watch factory\\\\nI1125 14:25:21.323026 6479 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:25:21.323059 6479 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:25:21.323078 6479 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 
14:25:21.323091 6479 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 14:25:21.323107 6479 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 14:25:21.323204 6479 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-22sz8_openshift-ovn-kubernetes(6eddc136-852e-4cf9-9f8a-e9ec94fc14d3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60
d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:45Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.004438 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753
cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:46Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.022215 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:46Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.042007 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:46Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.059091 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:46Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.067709 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.067735 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.067743 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.067757 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.067766 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:46Z","lastTransitionTime":"2025-11-25T14:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.074261 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:46Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.091463 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d0e5d28fbb41835a1f2790a85f8d340b3487500a92eb12385db1ff4ce4c85c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:44Z\\\",\\\"message\\\":\\\"2025-11-25T14:24:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_79685361-642c-4342-aae5-88fbfb152c76\\\\n2025-11-25T14:24:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_79685361-642c-4342-aae5-88fbfb152c76 to /host/opt/cni/bin/\\\\n2025-11-25T14:24:58Z [verbose] multus-daemon started\\\\n2025-11-25T14:24:58Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T14:25:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:46Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.103987 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603b
e378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:46Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.115964 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n4f9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07d588f-1940-4a4b-a4a9-94451e43ec8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n4f9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:46Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:46 crc 
kubenswrapper[4796]: I1125 14:25:46.129120 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:46Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.147398 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:25:46Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.165125 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:46Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.170056 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.170092 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.170105 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.170123 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.170135 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:46Z","lastTransitionTime":"2025-11-25T14:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.177740 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:46Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.192039 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d223a119-11a5-4802-9e8e-645fdb31ea88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://021d2f3be
55ec89048bafcd7e73eccf6bd883ad03e92be98479670ef2abde11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada54c8a99d59f4978026df480de5fa1fc20b957d8273eef164cca6ef8a79cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsz8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:46Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.208780 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:46Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.272961 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.272998 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.273010 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.273026 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.273037 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:46Z","lastTransitionTime":"2025-11-25T14:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.376350 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.376430 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.376443 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.376470 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.376481 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:46Z","lastTransitionTime":"2025-11-25T14:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.478654 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.478687 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.478700 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.478716 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.478727 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:46Z","lastTransitionTime":"2025-11-25T14:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.582283 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.582393 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.582414 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.582446 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.582467 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:46Z","lastTransitionTime":"2025-11-25T14:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.684976 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.685024 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.685038 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.685063 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.685086 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:46Z","lastTransitionTime":"2025-11-25T14:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.787943 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.787989 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.788005 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.788027 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.788044 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:46Z","lastTransitionTime":"2025-11-25T14:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.891949 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.892014 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.892031 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.892058 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.892080 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:46Z","lastTransitionTime":"2025-11-25T14:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.995418 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.995459 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.995471 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.995487 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:46 crc kubenswrapper[4796]: I1125 14:25:46.995500 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:46Z","lastTransitionTime":"2025-11-25T14:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.099033 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.099081 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.099105 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.099134 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.099154 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:47Z","lastTransitionTime":"2025-11-25T14:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.201924 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.201988 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.202011 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.202041 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.202063 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:47Z","lastTransitionTime":"2025-11-25T14:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.305359 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.305395 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.305407 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.305424 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.305436 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:47Z","lastTransitionTime":"2025-11-25T14:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.408464 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.408480 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.408480 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.408552 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:47 crc kubenswrapper[4796]: E1125 14:25:47.408720 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:47 crc kubenswrapper[4796]: E1125 14:25:47.408777 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:47 crc kubenswrapper[4796]: E1125 14:25:47.408868 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.409118 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.409144 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.409157 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.409171 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.409182 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:47Z","lastTransitionTime":"2025-11-25T14:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:47 crc kubenswrapper[4796]: E1125 14:25:47.409678 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.511372 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.511412 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.511423 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.511441 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.511455 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:47Z","lastTransitionTime":"2025-11-25T14:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.613895 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.613935 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.613944 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.613958 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.613971 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:47Z","lastTransitionTime":"2025-11-25T14:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.716086 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.716127 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.716137 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.716152 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.716162 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:47Z","lastTransitionTime":"2025-11-25T14:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.819276 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.819345 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.819364 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.819391 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.819408 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:47Z","lastTransitionTime":"2025-11-25T14:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.921763 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.921829 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.921847 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.921870 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:47 crc kubenswrapper[4796]: I1125 14:25:47.921902 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:47Z","lastTransitionTime":"2025-11-25T14:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.024977 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.025044 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.025062 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.025086 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.025103 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:48Z","lastTransitionTime":"2025-11-25T14:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.128349 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.128392 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.128405 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.128421 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.128432 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:48Z","lastTransitionTime":"2025-11-25T14:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.230653 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.230696 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.230707 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.230723 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.230735 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:48Z","lastTransitionTime":"2025-11-25T14:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.333283 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.333315 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.333325 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.333339 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.333349 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:48Z","lastTransitionTime":"2025-11-25T14:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.409283 4796 scope.go:117] "RemoveContainer" containerID="bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.436413 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.436450 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.436459 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.436475 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.436488 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:48Z","lastTransitionTime":"2025-11-25T14:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.539419 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.539481 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.539500 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.539561 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.539620 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:48Z","lastTransitionTime":"2025-11-25T14:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.643418 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.643511 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.643531 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.643557 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.643604 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:48Z","lastTransitionTime":"2025-11-25T14:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.746131 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.746171 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.746188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.746202 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.746212 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:48Z","lastTransitionTime":"2025-11-25T14:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.848543 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.848601 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.848614 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.848629 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.848640 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:48Z","lastTransitionTime":"2025-11-25T14:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.937828 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovnkube-controller/2.log" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.940616 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovn-acl-logging/0.log" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.941369 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerStarted","Data":"e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c"} Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.941816 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.951590 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.951638 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.951649 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.951666 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.951677 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:48Z","lastTransitionTime":"2025-11-25T14:25:48Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.955409 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\
\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:48Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.974476 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ 
MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/
var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:21Z\\\",\\\"message\\\":\\\"479 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1125 14:25:21.321197 6479 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 14:25:21.321230 6479 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:25:21.321244 6479 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 14:25:21.321289 6479 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:25:21.321302 6479 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:25:21.322883 6479 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1125 14:25:21.322905 6479 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 14:25:21.322934 6479 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:25:21.322947 6479 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 14:25:21.322999 6479 factory.go:656] Stopping watch 
factory\\\\nI1125 14:25:21.323026 6479 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:25:21.323059 6479 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:25:21.323078 6479 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 14:25:21.323091 6479 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 14:25:21.323107 6479 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 14:25:21.323204 6479 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run
-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initCo
ntainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:48Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:48 crc kubenswrapper[4796]: I1125 14:25:48.996044 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"447a616b-b891-4833-98e5-c5408231aece\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e898ed92981e7f3340ae53aa34928f0aa00dfb9be5464c60955ff9107bdbae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d868f7a390763d85a4b449d223d6ee9fbc6f99da73b9887cf01c3e364412809b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0da41b1ad7c29895c550716e8325fa0ba9c6bf6d1fd16482df8751102b6cbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a479f5ec52a12bd47f277ecfceac00b5e57038ec2c49c77988496aa510c3b73a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a479f5ec52a12bd47f277ecfceac00b5e57038ec2c49c77988496aa510c3b73a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:48Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.013452 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753
cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:49Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.028223 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:49Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.049826 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:49Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.054172 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.054218 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.054229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.054246 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.054259 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:49Z","lastTransitionTime":"2025-11-25T14:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.067390 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:49Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.080909 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:49Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.096368 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d0e5d28fbb41835a1f2790a85f8d340b3487500a92eb12385db1ff4ce4c85c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:44Z\\\",\\\"message\\\":\\\"2025-11-25T14:24:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_79685361-642c-4342-aae5-88fbfb152c76\\\\n2025-11-25T14:24:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_79685361-642c-4342-aae5-88fbfb152c76 to /host/opt/cni/bin/\\\\n2025-11-25T14:24:58Z [verbose] multus-daemon started\\\\n2025-11-25T14:24:58Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T14:25:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:49Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.110009 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603b
e378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:49Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.123832 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n4f9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07d588f-1940-4a4b-a4a9-94451e43ec8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n4f9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:49Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:49 crc 
kubenswrapper[4796]: I1125 14:25:49.139998 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:49Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.157820 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.157889 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.157905 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.157934 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.157956 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:49Z","lastTransitionTime":"2025-11-25T14:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.159170 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:49Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.173476 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:49Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.188928 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d223a119-11a5-4802-9e8e-645fdb31ea88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://021d2f3be55ec89048bafcd7e73eccf6bd883ad03e92be98479670ef2abde11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada54c8a99d59f4978026df480de5fa1fc20b
957d8273eef164cca6ef8a79cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsz8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:49Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.204276 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:49Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.218025 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:49Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.261491 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.261546 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.261557 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 
14:25:49.261589 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.261603 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:49Z","lastTransitionTime":"2025-11-25T14:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.364606 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.364658 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.364668 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.364700 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.364710 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:49Z","lastTransitionTime":"2025-11-25T14:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.408537 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.408550 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.408673 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:49 crc kubenswrapper[4796]: E1125 14:25:49.408852 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.409125 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:49 crc kubenswrapper[4796]: E1125 14:25:49.409375 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:49 crc kubenswrapper[4796]: E1125 14:25:49.409522 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:25:49 crc kubenswrapper[4796]: E1125 14:25:49.409696 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.467914 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.467988 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.468002 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.468028 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.468044 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:49Z","lastTransitionTime":"2025-11-25T14:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.570826 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.571224 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.571369 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.571450 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.571512 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:49Z","lastTransitionTime":"2025-11-25T14:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.674903 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.674957 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.674967 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.674986 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.674997 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:49Z","lastTransitionTime":"2025-11-25T14:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.779085 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.779196 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.779226 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.779261 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.779286 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:49Z","lastTransitionTime":"2025-11-25T14:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.881964 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.882080 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.882117 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.882152 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.882175 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:49Z","lastTransitionTime":"2025-11-25T14:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.947057 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovnkube-controller/3.log" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.947785 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovnkube-controller/2.log" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.950279 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovn-acl-logging/0.log" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.950876 4796 generic.go:334] "Generic (PLEG): container finished" podID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerID="e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c" exitCode=1 Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.950908 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerDied","Data":"e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c"} Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.950939 4796 scope.go:117] "RemoveContainer" containerID="bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.952056 4796 scope.go:117] "RemoveContainer" containerID="e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c" Nov 25 14:25:49 crc kubenswrapper[4796]: E1125 14:25:49.952352 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-22sz8_openshift-ovn-kubernetes(6eddc136-852e-4cf9-9f8a-e9ec94fc14d3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.969050 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:49Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.980696 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:49Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.984682 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.984714 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.984723 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.984736 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:49 crc kubenswrapper[4796]: I1125 14:25:49.984745 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:49Z","lastTransitionTime":"2025-11-25T14:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.006316 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd43b4f44f69a031806b7353000dd1c401199ca063f21fed573a9f774456cbb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:21Z\\\",\\\"message\\\":\\\"479 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1125 14:25:21.321197 6479 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 14:25:21.321230 6479 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:25:21.321244 6479 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 14:25:21.321289 6479 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:25:21.321302 6479 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:25:21.322883 6479 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1125 14:25:21.322905 6479 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 14:25:21.322934 6479 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:25:21.322947 6479 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 14:25:21.322999 6479 factory.go:656] Stopping watch factory\\\\nI1125 14:25:21.323026 6479 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:25:21.323059 6479 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:25:21.323078 6479 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 
14:25:21.323091 6479 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 14:25:21.323107 6479 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 14:25:21.323204 6479 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:49Z\\\",\\\"message\\\":\\\".go:160\\\\nI1125 14:25:49.293527 6841 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:25:49.293774 6841 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:25:49.293984 6841 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:25:49.303338 6841 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 14:25:49.303402 6841 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:25:49.303449 6841 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 14:25:49.303482 6841 factory.go:656] Stopping watch factory\\\\nI1125 14:25:49.303515 6841 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 14:25:49.305989 6841 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1125 14:25:49.306025 6841 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1125 14:25:49.306140 6841 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:25:49.306223 6841 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 14:25:49.306346 6841 
ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d74
4b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:50Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.024066 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"447a616b-b891-4833-98e5-c5408231aece\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e898ed92981e7f3340ae53aa34928f0aa00dfb9be5464c60955ff9107bdbae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d868f7a390763d85a4b449d223d6ee9fbc6f99da73b9887cf01c3e364412809b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0da41b1ad7c29895c550716e8325fa0ba9c6bf6d1fd16482df8751102b6cbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a479f5ec52a12bd47f277ecfceac00b5e57038ec2c49c77988496aa510c3b73a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a479f5ec52a12bd47f277ecfceac00b5e57038ec2c49c77988496aa510c3b73a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:50Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.043764 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753
cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:50Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.063015 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:50Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.081915 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:50Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.087722 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.087770 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.087785 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.087806 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.087821 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:50Z","lastTransitionTime":"2025-11-25T14:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.095672 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:50Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.112037 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:50Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.129529 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d0e5d28fbb41835a1f2790a85f8d340b3487500a92eb12385db1ff4ce4c85c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:44Z\\\",\\\"message\\\":\\\"2025-11-25T14:24:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_79685361-642c-4342-aae5-88fbfb152c76\\\\n2025-11-25T14:24:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_79685361-642c-4342-aae5-88fbfb152c76 to /host/opt/cni/bin/\\\\n2025-11-25T14:24:58Z [verbose] multus-daemon started\\\\n2025-11-25T14:24:58Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T14:25:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:50Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.143948 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603b
e378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:50Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.158642 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n4f9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07d588f-1940-4a4b-a4a9-94451e43ec8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n4f9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:50Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:50 crc 
kubenswrapper[4796]: I1125 14:25:50.174564 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:50Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.191956 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.192006 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.192020 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.192037 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.192050 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:50Z","lastTransitionTime":"2025-11-25T14:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.194788 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:50Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.208252 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:50Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.222931 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d223a119-11a5-4802-9e8e-645fdb31ea88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://021d2f3be55ec89048bafcd7e73eccf6bd883ad03e92be98479670ef2abde11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada54c8a99d59f4978026df480de5fa1fc20b
957d8273eef164cca6ef8a79cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsz8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:50Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.237049 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:50Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.295602 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.295648 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.295664 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.295684 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.295700 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:50Z","lastTransitionTime":"2025-11-25T14:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.398753 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.398796 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.398809 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.398826 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.398839 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:50Z","lastTransitionTime":"2025-11-25T14:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.501065 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.501158 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.501167 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.501187 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.501200 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:50Z","lastTransitionTime":"2025-11-25T14:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.604202 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.604253 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.604265 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.604284 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.604296 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:50Z","lastTransitionTime":"2025-11-25T14:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.707750 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.707824 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.707842 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.707868 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.707889 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:50Z","lastTransitionTime":"2025-11-25T14:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.811396 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.811458 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.811477 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.811501 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.811518 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:50Z","lastTransitionTime":"2025-11-25T14:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.913816 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.913888 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.913906 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.913929 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.913947 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:50Z","lastTransitionTime":"2025-11-25T14:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.957428 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovnkube-controller/3.log" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.960883 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovn-acl-logging/0.log" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.963199 4796 scope.go:117] "RemoveContainer" containerID="e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c" Nov 25 14:25:50 crc kubenswrapper[4796]: E1125 14:25:50.963489 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-22sz8_openshift-ovn-kubernetes(6eddc136-852e-4cf9-9f8a-e9ec94fc14d3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" Nov 25 14:25:50 crc kubenswrapper[4796]: I1125 14:25:50.983718 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:50Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.003216 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:50Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.019569 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.020007 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.020243 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.020448 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.020660 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:51Z","lastTransitionTime":"2025-11-25T14:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.024542 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:51Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.031471 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.031525 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.031618 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.031645 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.031663 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:51Z","lastTransitionTime":"2025-11-25T14:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.040948 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:51Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:51 crc kubenswrapper[4796]: E1125 14:25:51.048545 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:51Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.054188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.054275 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.054292 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.054314 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.054363 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:51Z","lastTransitionTime":"2025-11-25T14:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.058631 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d0e5d28fbb41835a1f2790a85f8d340b3487500a92eb12385db1ff4ce4c85c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:44Z\\\",\\\"message\\\":\\\"2025-11-25T14:24:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_79685361-642c-4342-aae5-88fbfb152c76\\\\n2025-11-25T14:24:58+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_79685361-642c-4342-aae5-88fbfb152c76 to /host/opt/cni/bin/\\\\n2025-11-25T14:24:58Z [verbose] multus-daemon started\\\\n2025-11-25T14:24:58Z [verbose] Readiness Indicator file check\\\\n2025-11-25T14:25:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:51Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.071537 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603b
e378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:51Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:51 crc kubenswrapper[4796]: E1125 14:25:51.071238 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:51Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.075733 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.075787 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.075799 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.075816 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.075829 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:51Z","lastTransitionTime":"2025-11-25T14:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.086400 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:51Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:51 crc kubenswrapper[4796]: E1125 14:25:51.088756 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:51Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.094358 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.094430 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.094446 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.094871 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.094890 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:51Z","lastTransitionTime":"2025-11-25T14:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.096184 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n4f9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07d588f-1940-4a4b-a4a9-94451e43ec8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n4f9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:51Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:51 crc 
kubenswrapper[4796]: E1125 14:25:51.105280 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:51Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.108095 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.108124 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.108134 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.108150 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.108162 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:51Z","lastTransitionTime":"2025-11-25T14:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.118185 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:51Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:51 crc kubenswrapper[4796]: E1125 14:25:51.120903 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:51Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:51 crc kubenswrapper[4796]: E1125 14:25:51.121281 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.123095 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.123124 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.123134 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.123149 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.123159 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:51Z","lastTransitionTime":"2025-11-25T14:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.128979 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:51Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.138764 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d223a119-11a5-4802-9e8e-645fdb31ea88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://021d2f3be55ec89048bafcd7e73eccf6bd883ad03e92be98479670ef2abde11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada54c8a99d59f4978026df480de5fa1fc20b
957d8273eef164cca6ef8a79cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsz8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:51Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.151159 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:51Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.165507 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:25:51Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.178029 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:51Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.196132 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ 
MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/
var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:49Z\\\",\\\"message\\\":\\\".go:160\\\\nI1125 14:25:49.293527 6841 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:25:49.293774 6841 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:25:49.293984 6841 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:25:49.303338 6841 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 14:25:49.303402 6841 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:25:49.303449 6841 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 14:25:49.303482 6841 factory.go:656] Stopping watch factory\\\\nI1125 14:25:49.303515 6841 handler.go:208] Removed *v1.Node event 
handler 2\\\\nI1125 14:25:49.305989 6841 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1125 14:25:49.306025 6841 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1125 14:25:49.306140 6841 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:25:49.306223 6841 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 14:25:49.306346 6841 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-22sz8_openshift-ovn-kubernetes(6eddc136-852e-4cf9-9f8a-e9ec94fc14d3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/net
works/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:51Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.207122 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"447a616b-b891-4833-98e5-c5408231aece\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e898ed92981e7f3340ae53aa34928f0aa00dfb9be5464c60955ff9107bdbae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d868f7a390763d85a4b449d223d6ee9fbc6f99da73b9887cf01c3e364412809b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0da41b1ad7c29895c550716e8325fa0ba9c6bf6d1fd16482df8751102b6cbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a479f5ec52a12bd47f277ecfceac00b5e57038ec2c49c77988496aa510c3b73a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://a479f5ec52a12bd47f277ecfceac00b5e57038ec2c49c77988496aa510c3b73a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:51Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.217016 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:51Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.225935 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.225974 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.225990 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.226010 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.226024 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:51Z","lastTransitionTime":"2025-11-25T14:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.329673 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.329743 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.329769 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.329800 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.329822 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:51Z","lastTransitionTime":"2025-11-25T14:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.408849 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.408855 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.408912 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.408932 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:51 crc kubenswrapper[4796]: E1125 14:25:51.409855 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:51 crc kubenswrapper[4796]: E1125 14:25:51.409603 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:51 crc kubenswrapper[4796]: E1125 14:25:51.409547 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:25:51 crc kubenswrapper[4796]: E1125 14:25:51.409983 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.432966 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.433068 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.433095 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.433123 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.433149 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:51Z","lastTransitionTime":"2025-11-25T14:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.535383 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.535624 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.535758 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.535824 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.535887 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:51Z","lastTransitionTime":"2025-11-25T14:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.638664 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.638922 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.639001 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.639124 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.639209 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:51Z","lastTransitionTime":"2025-11-25T14:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.741694 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.741746 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.741758 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.741774 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.741788 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:51Z","lastTransitionTime":"2025-11-25T14:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.844638 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.844736 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.844775 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.844807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.844833 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:51Z","lastTransitionTime":"2025-11-25T14:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.947728 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.947789 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.947806 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.947833 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:51 crc kubenswrapper[4796]: I1125 14:25:51.947850 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:51Z","lastTransitionTime":"2025-11-25T14:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.050929 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.050964 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.050976 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.051006 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.051018 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:52Z","lastTransitionTime":"2025-11-25T14:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.153747 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.153843 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.153900 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.153924 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.153940 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:52Z","lastTransitionTime":"2025-11-25T14:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.256512 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.256559 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.256599 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.256622 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.256639 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:52Z","lastTransitionTime":"2025-11-25T14:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.358892 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.358928 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.358942 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.358961 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.358975 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:52Z","lastTransitionTime":"2025-11-25T14:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.423122 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.439256 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"447a616b-b891-4833-98e5-c5408231aece\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e898ed92981e7f3340ae53aa34928f0aa00dfb9be5464c60955ff9107bdbae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d868f7a390763d85a4b449d223d6ee9fbc6f99da73b9887cf01c3e364412809b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0da41b1ad7c29895c550716e8325fa0ba9c6bf6d1fd16482df8751102b6cbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a479f5ec52a12bd47f277ecfceac00b5e57038ec2c49c77988496aa510c3b73a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a479f5ec52a12bd47f277ecfceac00b5e57038ec2c49c77988496aa510c3b73a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.451853 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea09785
66887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.462568 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.462653 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.462670 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:52 crc 
kubenswrapper[4796]: I1125 14:25:52.462694 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.462715 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:52Z","lastTransitionTime":"2025-11-25T14:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.472537 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:49Z\\\",\\\"message\\\":\\\".go:160\\\\nI1125 14:25:49.293527 6841 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:25:49.293774 6841 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:25:49.293984 6841 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:25:49.303338 6841 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 14:25:49.303402 6841 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:25:49.303449 6841 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 14:25:49.303482 6841 factory.go:656] Stopping watch factory\\\\nI1125 14:25:49.303515 6841 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 14:25:49.305989 6841 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1125 14:25:49.306025 6841 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1125 14:25:49.306140 6841 
ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:25:49.306223 6841 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 14:25:49.306346 6841 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-22sz8_openshift-ovn-kubernetes(6eddc136-852e-4cf9-9f8a-e9ec94fc14d3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\
\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f0
36f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.492168 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.512248 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d0e5d28fbb41835a1f2790a85f8d340b3487500a92eb12385db1ff4ce4c85c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:44Z\\\",\\\"message\\\":\\\"2025-11-25T14:24:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_79685361-642c-4342-aae5-88fbfb152c76\\\\n2025-11-25T14:24:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_79685361-642c-4342-aae5-88fbfb152c76 to /host/opt/cni/bin/\\\\n2025-11-25T14:24:58Z [verbose] multus-daemon started\\\\n2025-11-25T14:24:58Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T14:25:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.530524 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603b
e378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.545761 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753
cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.560238 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.564660 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.564706 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.564720 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.564741 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.564755 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:52Z","lastTransitionTime":"2025-11-25T14:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.574127 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.588805 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\
\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.602150 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n4f9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07d588f-1940-4a4b-a4a9-94451e43ec8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n4f9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:52 crc 
kubenswrapper[4796]: I1125 14:25:52.620604 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.637209 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:25:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.659608 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.667210 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.667246 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.667262 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.667278 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.667289 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:52Z","lastTransitionTime":"2025-11-25T14:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.674336 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.689448 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d223a119-11a5-4802-9e8e-645fdb31ea88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://021d2f3be
55ec89048bafcd7e73eccf6bd883ad03e92be98479670ef2abde11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada54c8a99d59f4978026df480de5fa1fc20b957d8273eef164cca6ef8a79cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsz8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:25:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.770085 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.770157 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.770175 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.770201 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.770220 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:52Z","lastTransitionTime":"2025-11-25T14:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.873003 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.873071 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.873093 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.873122 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.873145 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:52Z","lastTransitionTime":"2025-11-25T14:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.975115 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.975152 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.975163 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.975179 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:52 crc kubenswrapper[4796]: I1125 14:25:52.975190 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:52Z","lastTransitionTime":"2025-11-25T14:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.078536 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.078633 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.078656 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.078686 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.078708 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:53Z","lastTransitionTime":"2025-11-25T14:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.183255 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.183345 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.183367 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.183397 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.183418 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:53Z","lastTransitionTime":"2025-11-25T14:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.286243 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.286311 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.286328 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.286351 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.286369 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:53Z","lastTransitionTime":"2025-11-25T14:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.389055 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.389387 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.389411 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.389440 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.389465 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:53Z","lastTransitionTime":"2025-11-25T14:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.408901 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.409018 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.408932 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.408931 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:53 crc kubenswrapper[4796]: E1125 14:25:53.409132 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:53 crc kubenswrapper[4796]: E1125 14:25:53.409308 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:53 crc kubenswrapper[4796]: E1125 14:25:53.409439 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:53 crc kubenswrapper[4796]: E1125 14:25:53.409814 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.492757 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.492815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.492832 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.492857 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.492876 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:53Z","lastTransitionTime":"2025-11-25T14:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.596008 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.596072 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.596088 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.596111 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.596131 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:53Z","lastTransitionTime":"2025-11-25T14:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.699556 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.699676 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.699699 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.699726 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.699744 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:53Z","lastTransitionTime":"2025-11-25T14:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.803471 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.803516 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.803533 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.803557 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.803607 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:53Z","lastTransitionTime":"2025-11-25T14:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.906833 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.906886 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.906904 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.906926 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:53 crc kubenswrapper[4796]: I1125 14:25:53.906944 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:53Z","lastTransitionTime":"2025-11-25T14:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.009667 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.009715 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.009731 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.009754 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.009772 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:54Z","lastTransitionTime":"2025-11-25T14:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.113120 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.113175 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.113189 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.113205 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.113219 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:54Z","lastTransitionTime":"2025-11-25T14:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.216691 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.216738 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.216748 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.216766 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.216778 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:54Z","lastTransitionTime":"2025-11-25T14:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.319611 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.319651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.319665 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.319682 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.319697 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:54Z","lastTransitionTime":"2025-11-25T14:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.421986 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.422037 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.422054 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.422073 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.422086 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:54Z","lastTransitionTime":"2025-11-25T14:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.524998 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.525061 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.525078 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.525103 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.525123 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:54Z","lastTransitionTime":"2025-11-25T14:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.628510 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.628616 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.628637 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.628663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.628681 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:54Z","lastTransitionTime":"2025-11-25T14:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.731452 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.731506 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.731518 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.731536 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.731549 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:54Z","lastTransitionTime":"2025-11-25T14:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.834514 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.834644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.834669 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.834703 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.834729 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:54Z","lastTransitionTime":"2025-11-25T14:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.938171 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.938238 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.938256 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.938280 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:54 crc kubenswrapper[4796]: I1125 14:25:54.938298 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:54Z","lastTransitionTime":"2025-11-25T14:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.041221 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.041272 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.041290 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.041311 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.041328 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:55Z","lastTransitionTime":"2025-11-25T14:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.143902 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.144033 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.144046 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.144059 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.144067 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:55Z","lastTransitionTime":"2025-11-25T14:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.247621 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.247699 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.247724 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.247753 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.247776 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:55Z","lastTransitionTime":"2025-11-25T14:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.350070 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.350119 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.350135 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.350158 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.350172 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:55Z","lastTransitionTime":"2025-11-25T14:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.408881 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.408953 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.408881 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:55 crc kubenswrapper[4796]: E1125 14:25:55.409010 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.408898 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:55 crc kubenswrapper[4796]: E1125 14:25:55.409098 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:55 crc kubenswrapper[4796]: E1125 14:25:55.409158 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:55 crc kubenswrapper[4796]: E1125 14:25:55.409200 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.453761 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.453800 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.453812 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.453833 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.453845 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:55Z","lastTransitionTime":"2025-11-25T14:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.556663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.556716 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.556735 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.556758 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.556775 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:55Z","lastTransitionTime":"2025-11-25T14:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.659231 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.659370 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.659399 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.659431 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.659454 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:55Z","lastTransitionTime":"2025-11-25T14:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.763627 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.763984 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.764001 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.764025 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.764042 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:55Z","lastTransitionTime":"2025-11-25T14:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.867272 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.867415 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.867445 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.867480 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.867503 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:55Z","lastTransitionTime":"2025-11-25T14:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.970256 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.970307 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.970324 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.970348 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:55 crc kubenswrapper[4796]: I1125 14:25:55.970365 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:55Z","lastTransitionTime":"2025-11-25T14:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.072931 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.072978 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.072989 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.073002 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.073011 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:56Z","lastTransitionTime":"2025-11-25T14:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.176423 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.176507 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.176531 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.176563 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.176614 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:56Z","lastTransitionTime":"2025-11-25T14:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.278957 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.279024 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.279035 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.279047 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.279058 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:56Z","lastTransitionTime":"2025-11-25T14:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.381854 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.381907 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.381924 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.381947 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.381964 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:56Z","lastTransitionTime":"2025-11-25T14:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.485322 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.485383 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.485401 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.485425 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.485441 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:56Z","lastTransitionTime":"2025-11-25T14:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.588555 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.588645 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.588663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.588687 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.588706 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:56Z","lastTransitionTime":"2025-11-25T14:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.691252 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.691312 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.691328 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.691350 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.691371 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:56Z","lastTransitionTime":"2025-11-25T14:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.793367 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.793427 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.793445 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.793498 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.793516 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:56Z","lastTransitionTime":"2025-11-25T14:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.896341 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.896383 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.896395 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.896410 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.896421 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:56Z","lastTransitionTime":"2025-11-25T14:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.999120 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.999206 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.999230 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.999260 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:56 crc kubenswrapper[4796]: I1125 14:25:56.999282 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:56Z","lastTransitionTime":"2025-11-25T14:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.102052 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.102105 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.102113 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.102128 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.102137 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:57Z","lastTransitionTime":"2025-11-25T14:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.205022 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.205079 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.205097 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.205122 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.205140 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:57Z","lastTransitionTime":"2025-11-25T14:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.307401 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.307451 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.307463 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.307480 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.307493 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:57Z","lastTransitionTime":"2025-11-25T14:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.320464 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:25:57 crc kubenswrapper[4796]: E1125 14:25:57.320687 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 14:27:01.320655548 +0000 UTC m=+149.663764982 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.320879 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:57 crc kubenswrapper[4796]: E1125 14:25:57.321002 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:25:57 crc kubenswrapper[4796]: E1125 14:25:57.321067 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:27:01.32105655 +0000 UTC m=+149.664165974 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.408430 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.408497 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.408555 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:57 crc kubenswrapper[4796]: E1125 14:25:57.408640 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.408457 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:57 crc kubenswrapper[4796]: E1125 14:25:57.409017 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:25:57 crc kubenswrapper[4796]: E1125 14:25:57.409045 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:57 crc kubenswrapper[4796]: E1125 14:25:57.409122 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.410135 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.410197 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.410229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.410243 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.410253 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:57Z","lastTransitionTime":"2025-11-25T14:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.421982 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.422037 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.422098 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:57 crc kubenswrapper[4796]: E1125 14:25:57.422161 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:25:57 crc kubenswrapper[4796]: E1125 
14:25:57.422225 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:25:57 crc kubenswrapper[4796]: E1125 14:25:57.422245 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:25:57 crc kubenswrapper[4796]: E1125 14:25:57.422257 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:25:57 crc kubenswrapper[4796]: E1125 14:25:57.422279 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:25:57 crc kubenswrapper[4796]: E1125 14:25:57.422259 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:27:01.422237261 +0000 UTC m=+149.765346735 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:25:57 crc kubenswrapper[4796]: E1125 14:25:57.422313 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:25:57 crc kubenswrapper[4796]: E1125 14:25:57.422328 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:25:57 crc kubenswrapper[4796]: E1125 14:25:57.422329 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 14:27:01.422312743 +0000 UTC m=+149.765422247 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:25:57 crc kubenswrapper[4796]: E1125 14:25:57.422381 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 14:27:01.422361155 +0000 UTC m=+149.765470639 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.513274 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.513397 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.513421 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.513445 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.513462 4796 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:57Z","lastTransitionTime":"2025-11-25T14:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.617026 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.617077 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.617093 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.617116 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.617132 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:57Z","lastTransitionTime":"2025-11-25T14:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.719449 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.719516 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.719528 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.719546 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.719557 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:57Z","lastTransitionTime":"2025-11-25T14:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.822852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.822903 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.822931 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.822948 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.822958 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:57Z","lastTransitionTime":"2025-11-25T14:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.925443 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.925493 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.925509 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.925532 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:57 crc kubenswrapper[4796]: I1125 14:25:57.925548 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:57Z","lastTransitionTime":"2025-11-25T14:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.028256 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.028308 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.028320 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.028338 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.028351 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:58Z","lastTransitionTime":"2025-11-25T14:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.130501 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.130545 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.130560 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.130593 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.130608 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:58Z","lastTransitionTime":"2025-11-25T14:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.232695 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.232729 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.232741 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.232758 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.232770 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:58Z","lastTransitionTime":"2025-11-25T14:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.335487 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.335537 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.335546 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.335560 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.335570 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:58Z","lastTransitionTime":"2025-11-25T14:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.437653 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.437764 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.437786 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.437815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.437837 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:58Z","lastTransitionTime":"2025-11-25T14:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.540379 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.540456 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.540474 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.540505 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.540526 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:58Z","lastTransitionTime":"2025-11-25T14:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.644432 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.644520 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.644531 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.644551 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.644565 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:58Z","lastTransitionTime":"2025-11-25T14:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.747144 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.747186 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.747195 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.747208 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.747218 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:58Z","lastTransitionTime":"2025-11-25T14:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.849927 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.849967 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.849978 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.849993 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.850003 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:58Z","lastTransitionTime":"2025-11-25T14:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.953523 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.953599 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.953619 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.953642 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:58 crc kubenswrapper[4796]: I1125 14:25:58.953658 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:58Z","lastTransitionTime":"2025-11-25T14:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.056763 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.056805 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.056818 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.056834 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.056845 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:59Z","lastTransitionTime":"2025-11-25T14:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.159923 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.160003 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.160026 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.160058 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.160082 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:59Z","lastTransitionTime":"2025-11-25T14:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.262056 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.262120 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.262131 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.262172 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.262188 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:59Z","lastTransitionTime":"2025-11-25T14:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.364876 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.364912 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.364923 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.364939 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.364951 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:59Z","lastTransitionTime":"2025-11-25T14:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.408705 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.408801 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:25:59 crc kubenswrapper[4796]: E1125 14:25:59.408871 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.408931 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:25:59 crc kubenswrapper[4796]: E1125 14:25:59.409059 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.409180 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:25:59 crc kubenswrapper[4796]: E1125 14:25:59.409451 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:25:59 crc kubenswrapper[4796]: E1125 14:25:59.409476 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.467709 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.467873 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.467909 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.467938 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.467962 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:59Z","lastTransitionTime":"2025-11-25T14:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.572042 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.572086 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.572097 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.572114 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.572125 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:59Z","lastTransitionTime":"2025-11-25T14:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.674749 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.674791 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.674801 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.674816 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.674828 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:59Z","lastTransitionTime":"2025-11-25T14:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.777695 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.777736 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.777746 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.777764 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.777776 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:59Z","lastTransitionTime":"2025-11-25T14:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.880932 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.880985 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.880996 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.881014 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.881025 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:59Z","lastTransitionTime":"2025-11-25T14:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.984218 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.984278 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.984300 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.984330 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:25:59 crc kubenswrapper[4796]: I1125 14:25:59.984355 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:25:59Z","lastTransitionTime":"2025-11-25T14:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.087659 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.087725 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.087743 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.087767 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.087784 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:00Z","lastTransitionTime":"2025-11-25T14:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.190526 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.190615 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.190633 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.190656 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.190673 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:00Z","lastTransitionTime":"2025-11-25T14:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.294164 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.294227 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.294250 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.294282 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.294309 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:00Z","lastTransitionTime":"2025-11-25T14:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.399286 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.399358 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.399375 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.399401 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.399420 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:00Z","lastTransitionTime":"2025-11-25T14:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.502905 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.503258 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.503270 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.503289 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.503301 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:00Z","lastTransitionTime":"2025-11-25T14:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.606381 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.606448 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.606465 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.606492 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.606513 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:00Z","lastTransitionTime":"2025-11-25T14:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.709332 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.709426 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.709444 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.709468 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.709485 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:00Z","lastTransitionTime":"2025-11-25T14:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.812850 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.812907 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.812924 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.812951 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.812969 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:00Z","lastTransitionTime":"2025-11-25T14:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.916790 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.916855 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.916872 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.916896 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:00 crc kubenswrapper[4796]: I1125 14:26:00.916913 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:00Z","lastTransitionTime":"2025-11-25T14:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.019608 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.019648 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.019660 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.019677 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.019691 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:01Z","lastTransitionTime":"2025-11-25T14:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.122242 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.122294 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.122308 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.122328 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.122342 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:01Z","lastTransitionTime":"2025-11-25T14:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.225218 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.225290 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.225313 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.225345 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.225366 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:01Z","lastTransitionTime":"2025-11-25T14:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.328164 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.328229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.328253 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.328278 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.328296 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:01Z","lastTransitionTime":"2025-11-25T14:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.344499 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.344563 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.344618 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.344646 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.344665 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:01Z","lastTransitionTime":"2025-11-25T14:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:01 crc kubenswrapper[4796]: E1125 14:26:01.366896 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.375715 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.375785 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.375802 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.375827 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.375844 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:01Z","lastTransitionTime":"2025-11-25T14:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:01 crc kubenswrapper[4796]: E1125 14:26:01.393686 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.398506 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.398555 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.398595 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.398616 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.398632 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:01Z","lastTransitionTime":"2025-11-25T14:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.408303 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:26:01 crc kubenswrapper[4796]: E1125 14:26:01.408481 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.408562 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:26:01 crc kubenswrapper[4796]: E1125 14:26:01.408688 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.408889 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.408921 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:26:01 crc kubenswrapper[4796]: E1125 14:26:01.409476 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:26:01 crc kubenswrapper[4796]: E1125 14:26:01.409631 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.409840 4796 scope.go:117] "RemoveContainer" containerID="e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c" Nov 25 14:26:01 crc kubenswrapper[4796]: E1125 14:26:01.410072 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-22sz8_openshift-ovn-kubernetes(6eddc136-852e-4cf9-9f8a-e9ec94fc14d3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" Nov 25 14:26:01 crc kubenswrapper[4796]: E1125 14:26:01.415037 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.419481 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.419534 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.419552 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.419598 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.419615 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:01Z","lastTransitionTime":"2025-11-25T14:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:01 crc kubenswrapper[4796]: E1125 14:26:01.438990 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.443423 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.443467 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.443484 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.443506 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.443525 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:01Z","lastTransitionTime":"2025-11-25T14:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:01 crc kubenswrapper[4796]: E1125 14:26:01.460281 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:01 crc kubenswrapper[4796]: E1125 14:26:01.460495 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.462680 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.462728 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.462743 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.462763 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.462779 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:01Z","lastTransitionTime":"2025-11-25T14:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.565466 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.565527 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.565544 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.565610 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.565630 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:01Z","lastTransitionTime":"2025-11-25T14:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.668649 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.668693 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.668709 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.668734 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.668750 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:01Z","lastTransitionTime":"2025-11-25T14:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.771100 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.771154 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.771167 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.771183 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.771194 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:01Z","lastTransitionTime":"2025-11-25T14:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.874266 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.874302 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.874312 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.874328 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.874338 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:01Z","lastTransitionTime":"2025-11-25T14:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.976172 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.976199 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.976208 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.976221 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:01 crc kubenswrapper[4796]: I1125 14:26:01.976230 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:01Z","lastTransitionTime":"2025-11-25T14:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.078925 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.079234 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.079254 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.079274 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.079287 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:02Z","lastTransitionTime":"2025-11-25T14:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.182703 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.183062 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.183082 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.183098 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.183112 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:02Z","lastTransitionTime":"2025-11-25T14:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.286542 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.286631 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.286650 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.286675 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.286696 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:02Z","lastTransitionTime":"2025-11-25T14:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.389452 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.389484 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.389494 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.389511 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.389522 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:02Z","lastTransitionTime":"2025-11-25T14:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.428256 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.451854 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:49Z\\\",\\\"message\\\":\\\".go:160\\\\nI1125 14:25:49.293527 6841 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:25:49.293774 6841 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:25:49.293984 6841 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:25:49.303338 6841 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 14:25:49.303402 6841 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:25:49.303449 6841 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 14:25:49.303482 6841 factory.go:656] Stopping watch factory\\\\nI1125 14:25:49.303515 6841 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 14:25:49.305989 6841 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1125 14:25:49.306025 6841 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1125 14:25:49.306140 6841 
ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:25:49.306223 6841 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 14:25:49.306346 6841 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-22sz8_openshift-ovn-kubernetes(6eddc136-852e-4cf9-9f8a-e9ec94fc14d3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\
\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f0
36f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.470428 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"447a616b-b891-4833-98e5-c5408231aece\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e898ed92981e7f3340ae53aa34928f0aa00dfb9be5464c60955ff9107bdbae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d868f7a390763d85a4b449d223d6ee9fbc6f99da73b9887cf01c3e364412809b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0da41b1ad7c29895c550716e8325fa0ba9c6bf6d1fd16482df8751102b6cbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a479f5ec52a12bd47f277ecfceac00b5e57038ec2c49c77988496aa510c3b73a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a479f5ec52a12bd47f277ecfceac00b5e57038ec2c49c77988496aa510c3b73a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.487554 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753
cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.493327 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.493359 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.493372 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.493387 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.493400 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:02Z","lastTransitionTime":"2025-11-25T14:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.506420 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec
08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.522129 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.541020 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.555340 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.568360 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d0e5d28fbb41835a1f2790a85f8d340b3487500a92eb12385db1ff4ce4c85c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:44Z\\\",\\\"message\\\":\\\"2025-11-25T14:24:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_79685361-642c-4342-aae5-88fbfb152c76\\\\n2025-11-25T14:24:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_79685361-642c-4342-aae5-88fbfb152c76 to /host/opt/cni/bin/\\\\n2025-11-25T14:24:58Z [verbose] multus-daemon started\\\\n2025-11-25T14:24:58Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T14:25:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.586865 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603b
e378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.596435 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.596529 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.596545 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:02 crc 
kubenswrapper[4796]: I1125 14:26:02.596589 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.596606 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:02Z","lastTransitionTime":"2025-11-25T14:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.601598 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n4f9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07d588f-1940-4a4b-a4a9-94451e43ec8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n4f9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:02 crc 
kubenswrapper[4796]: I1125 14:26:02.612845 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.629804 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypo
int\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\"
:\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.641698 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db4
46b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.656696 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d223a119-11a5-4802-9e8e-645fdb31ea88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://021d2f3be55ec89048bafcd7e73eccf6bd883ad03e92be98479670ef2abde11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada54c8a99d59f4978026df480de5fa1fc20b
957d8273eef164cca6ef8a79cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsz8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.676026 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.692558 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.698846 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.698915 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.698928 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 
14:26:02.698970 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.698985 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:02Z","lastTransitionTime":"2025-11-25T14:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.802630 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.802694 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.802709 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.802735 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.802748 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:02Z","lastTransitionTime":"2025-11-25T14:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.905167 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.905236 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.905256 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.905284 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:02 crc kubenswrapper[4796]: I1125 14:26:02.905303 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:02Z","lastTransitionTime":"2025-11-25T14:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.009146 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.009209 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.009225 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.009252 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.009270 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:03Z","lastTransitionTime":"2025-11-25T14:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.111988 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.112081 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.112095 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.112116 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.112132 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:03Z","lastTransitionTime":"2025-11-25T14:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.215173 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.215234 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.215245 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.215262 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.215274 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:03Z","lastTransitionTime":"2025-11-25T14:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.317827 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.317869 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.317880 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.317896 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.317908 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:03Z","lastTransitionTime":"2025-11-25T14:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.408931 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:26:03 crc kubenswrapper[4796]: E1125 14:26:03.409060 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.408953 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.408931 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.409167 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:26:03 crc kubenswrapper[4796]: E1125 14:26:03.409140 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:26:03 crc kubenswrapper[4796]: E1125 14:26:03.409383 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:26:03 crc kubenswrapper[4796]: E1125 14:26:03.409751 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.420689 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.420736 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.420749 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.420766 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.420779 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:03Z","lastTransitionTime":"2025-11-25T14:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.523453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.523487 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.523497 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.523512 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.523522 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:03Z","lastTransitionTime":"2025-11-25T14:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.627334 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.627397 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.627410 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.627426 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.627437 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:03Z","lastTransitionTime":"2025-11-25T14:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.730773 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.730840 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.730859 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.730883 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.730903 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:03Z","lastTransitionTime":"2025-11-25T14:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.833933 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.833995 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.834013 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.834039 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.834057 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:03Z","lastTransitionTime":"2025-11-25T14:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.936478 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.936541 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.936565 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.936632 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:03 crc kubenswrapper[4796]: I1125 14:26:03.936655 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:03Z","lastTransitionTime":"2025-11-25T14:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.040082 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.040148 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.040161 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.040177 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.040188 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:04Z","lastTransitionTime":"2025-11-25T14:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.143412 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.143481 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.143498 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.143525 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.143542 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:04Z","lastTransitionTime":"2025-11-25T14:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.245984 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.246030 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.246047 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.246072 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.246088 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:04Z","lastTransitionTime":"2025-11-25T14:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.348256 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.348330 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.348354 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.348389 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.348413 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:04Z","lastTransitionTime":"2025-11-25T14:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.451778 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.451820 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.451830 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.451848 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.451860 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:04Z","lastTransitionTime":"2025-11-25T14:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.554992 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.555124 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.555146 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.555169 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.555186 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:04Z","lastTransitionTime":"2025-11-25T14:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.657735 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.657824 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.657849 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.658223 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.658242 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:04Z","lastTransitionTime":"2025-11-25T14:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.761805 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.761903 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.761920 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.761943 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.761963 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:04Z","lastTransitionTime":"2025-11-25T14:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.865280 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.865327 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.865339 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.865357 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.865368 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:04Z","lastTransitionTime":"2025-11-25T14:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.968612 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.968671 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.968697 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.968724 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:04 crc kubenswrapper[4796]: I1125 14:26:04.968745 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:04Z","lastTransitionTime":"2025-11-25T14:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.071346 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.071382 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.071395 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.071409 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.071442 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:05Z","lastTransitionTime":"2025-11-25T14:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.174522 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.174560 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.174569 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.174595 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.174605 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:05Z","lastTransitionTime":"2025-11-25T14:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.277740 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.277803 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.277823 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.277847 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.277866 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:05Z","lastTransitionTime":"2025-11-25T14:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.380976 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.381042 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.381064 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.381092 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.381114 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:05Z","lastTransitionTime":"2025-11-25T14:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.408892 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.408931 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:26:05 crc kubenswrapper[4796]: E1125 14:26:05.409077 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.409322 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:26:05 crc kubenswrapper[4796]: E1125 14:26:05.409438 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.409657 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:26:05 crc kubenswrapper[4796]: E1125 14:26:05.409865 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:26:05 crc kubenswrapper[4796]: E1125 14:26:05.410057 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.483783 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.483822 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.483833 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.483850 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.483863 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:05Z","lastTransitionTime":"2025-11-25T14:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.586844 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.586917 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.586942 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.586970 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.586993 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:05Z","lastTransitionTime":"2025-11-25T14:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.689860 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.689933 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.689954 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.689987 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.690008 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:05Z","lastTransitionTime":"2025-11-25T14:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.793508 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.793566 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.793614 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.793640 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.793657 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:05Z","lastTransitionTime":"2025-11-25T14:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.896866 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.896940 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.896965 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.896994 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.897016 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:05Z","lastTransitionTime":"2025-11-25T14:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.999867 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.999910 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.999921 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.999941 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:05 crc kubenswrapper[4796]: I1125 14:26:05.999953 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:05Z","lastTransitionTime":"2025-11-25T14:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.103111 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.103153 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.103164 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.103178 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.103190 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:06Z","lastTransitionTime":"2025-11-25T14:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.206308 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.206363 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.206385 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.206409 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.206427 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:06Z","lastTransitionTime":"2025-11-25T14:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.313773 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.313820 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.313832 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.313849 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.313861 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:06Z","lastTransitionTime":"2025-11-25T14:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.415772 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.415843 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.415886 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.415915 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.415937 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:06Z","lastTransitionTime":"2025-11-25T14:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.424871 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.519207 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.519277 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.519299 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.519330 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.519355 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:06Z","lastTransitionTime":"2025-11-25T14:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.622515 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.622627 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.622653 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.622676 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.622695 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:06Z","lastTransitionTime":"2025-11-25T14:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.725499 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.725567 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.725629 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.725658 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.725679 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:06Z","lastTransitionTime":"2025-11-25T14:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.829214 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.829265 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.829274 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.829292 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.829301 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:06Z","lastTransitionTime":"2025-11-25T14:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.932413 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.932475 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.932493 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.932520 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:06 crc kubenswrapper[4796]: I1125 14:26:06.932567 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:06Z","lastTransitionTime":"2025-11-25T14:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.035389 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.035449 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.035467 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.035492 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.035509 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:07Z","lastTransitionTime":"2025-11-25T14:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.138141 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.138176 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.138189 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.138205 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.138215 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:07Z","lastTransitionTime":"2025-11-25T14:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.245107 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.245175 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.245194 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.245313 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.245368 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:07Z","lastTransitionTime":"2025-11-25T14:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.348333 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.348407 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.348420 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.348437 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.348478 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:07Z","lastTransitionTime":"2025-11-25T14:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.409011 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.409045 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.409116 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:26:07 crc kubenswrapper[4796]: E1125 14:26:07.409160 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.409207 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:26:07 crc kubenswrapper[4796]: E1125 14:26:07.409321 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:26:07 crc kubenswrapper[4796]: E1125 14:26:07.409381 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:26:07 crc kubenswrapper[4796]: E1125 14:26:07.409446 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.451351 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.451421 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.451438 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.451462 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.451479 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:07Z","lastTransitionTime":"2025-11-25T14:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.553701 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.553747 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.553761 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.553781 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.553796 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:07Z","lastTransitionTime":"2025-11-25T14:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.656962 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.657028 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.657046 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.657072 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.657091 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:07Z","lastTransitionTime":"2025-11-25T14:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.760012 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.760061 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.760073 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.760128 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.760139 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:07Z","lastTransitionTime":"2025-11-25T14:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.863442 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.863476 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.863483 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.863496 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.863505 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:07Z","lastTransitionTime":"2025-11-25T14:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.965816 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.965870 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.965887 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.965908 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:07 crc kubenswrapper[4796]: I1125 14:26:07.965927 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:07Z","lastTransitionTime":"2025-11-25T14:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.068742 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.068795 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.068808 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.068826 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.068840 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:08Z","lastTransitionTime":"2025-11-25T14:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.171522 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.171626 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.171649 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.171677 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.171733 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:08Z","lastTransitionTime":"2025-11-25T14:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.275070 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.275130 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.275149 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.275174 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.275193 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:08Z","lastTransitionTime":"2025-11-25T14:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.377641 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.377712 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.377748 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.377794 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.377812 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:08Z","lastTransitionTime":"2025-11-25T14:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.481264 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.481346 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.481370 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.481397 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.481416 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:08Z","lastTransitionTime":"2025-11-25T14:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.584607 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.584668 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.584685 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.584709 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.584727 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:08Z","lastTransitionTime":"2025-11-25T14:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.687762 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.687827 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.687845 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.687870 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.687888 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:08Z","lastTransitionTime":"2025-11-25T14:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.791043 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.791109 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.791135 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.791165 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.791187 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:08Z","lastTransitionTime":"2025-11-25T14:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.895009 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.895125 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.895143 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.895166 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.895182 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:08Z","lastTransitionTime":"2025-11-25T14:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.998402 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.998455 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.998471 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.998496 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:08 crc kubenswrapper[4796]: I1125 14:26:08.998512 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:08Z","lastTransitionTime":"2025-11-25T14:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.101641 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.101702 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.101749 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.101779 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.101798 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:09Z","lastTransitionTime":"2025-11-25T14:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.205022 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.205080 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.205097 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.205122 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.205141 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:09Z","lastTransitionTime":"2025-11-25T14:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.308546 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.308637 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.308656 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.308680 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.308698 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:09Z","lastTransitionTime":"2025-11-25T14:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.408286 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.408607 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.408651 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.408688 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:26:09 crc kubenswrapper[4796]: E1125 14:26:09.408823 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:26:09 crc kubenswrapper[4796]: E1125 14:26:09.409243 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:26:09 crc kubenswrapper[4796]: E1125 14:26:09.409451 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:26:09 crc kubenswrapper[4796]: E1125 14:26:09.409668 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.411365 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.411429 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.411453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.411480 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.411501 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:09Z","lastTransitionTime":"2025-11-25T14:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.515156 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.515233 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.515251 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.515273 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.515291 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:09Z","lastTransitionTime":"2025-11-25T14:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.618341 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.618415 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.618439 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.618467 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.618489 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:09Z","lastTransitionTime":"2025-11-25T14:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.720806 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.720840 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.720852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.720866 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.720877 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:09Z","lastTransitionTime":"2025-11-25T14:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.824034 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.824094 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.824106 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.824127 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.824151 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:09Z","lastTransitionTime":"2025-11-25T14:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.926444 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.926504 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.926523 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.926545 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:09 crc kubenswrapper[4796]: I1125 14:26:09.926560 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:09Z","lastTransitionTime":"2025-11-25T14:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.034519 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.034644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.034695 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.034722 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.034744 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:10Z","lastTransitionTime":"2025-11-25T14:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.137663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.137727 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.137746 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.137772 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.137791 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:10Z","lastTransitionTime":"2025-11-25T14:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.241358 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.241418 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.241437 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.241461 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.241478 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:10Z","lastTransitionTime":"2025-11-25T14:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.344834 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.344926 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.344945 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.344974 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.344992 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:10Z","lastTransitionTime":"2025-11-25T14:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.447939 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.448008 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.448021 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.448045 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.448057 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:10Z","lastTransitionTime":"2025-11-25T14:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.551980 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.552015 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.552025 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.552039 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.552048 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:10Z","lastTransitionTime":"2025-11-25T14:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.654749 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.654827 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.654849 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.654878 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.654899 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:10Z","lastTransitionTime":"2025-11-25T14:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.758037 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.758110 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.758147 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.758186 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.758206 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:10Z","lastTransitionTime":"2025-11-25T14:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.861650 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.861706 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.861728 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.861759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.861780 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:10Z","lastTransitionTime":"2025-11-25T14:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.964618 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.964656 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.964668 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.964690 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:10 crc kubenswrapper[4796]: I1125 14:26:10.964705 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:10Z","lastTransitionTime":"2025-11-25T14:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.067280 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.067334 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.067352 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.067374 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.067391 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:11Z","lastTransitionTime":"2025-11-25T14:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.170311 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.170371 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.170390 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.170412 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.170429 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:11Z","lastTransitionTime":"2025-11-25T14:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.273551 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.273631 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.273648 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.273671 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.273687 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:11Z","lastTransitionTime":"2025-11-25T14:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.376901 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.376958 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.376977 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.377000 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.377021 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:11Z","lastTransitionTime":"2025-11-25T14:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.408274 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.408309 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.408404 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:26:11 crc kubenswrapper[4796]: E1125 14:26:11.408606 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.408650 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:26:11 crc kubenswrapper[4796]: E1125 14:26:11.408789 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:26:11 crc kubenswrapper[4796]: E1125 14:26:11.408901 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:26:11 crc kubenswrapper[4796]: E1125 14:26:11.409036 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.480346 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.480430 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.480454 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.480485 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.480507 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:11Z","lastTransitionTime":"2025-11-25T14:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.496090 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.496136 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.496153 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.496176 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.496194 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:11Z","lastTransitionTime":"2025-11-25T14:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:11 crc kubenswrapper[4796]: E1125 14:26:11.516955 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.522226 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.522309 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.522333 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.522363 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.522386 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:11Z","lastTransitionTime":"2025-11-25T14:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:11 crc kubenswrapper[4796]: E1125 14:26:11.542301 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.546822 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.546891 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.546915 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.546948 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.546970 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:11Z","lastTransitionTime":"2025-11-25T14:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:11 crc kubenswrapper[4796]: E1125 14:26:11.567208 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.572296 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.572346 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.572365 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.572390 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.572409 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:11Z","lastTransitionTime":"2025-11-25T14:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:11 crc kubenswrapper[4796]: E1125 14:26:11.591626 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.596432 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.596610 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.596736 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.596768 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.596787 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:11Z","lastTransitionTime":"2025-11-25T14:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:11 crc kubenswrapper[4796]: E1125 14:26:11.616603 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:26:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"416e3262-df66-4d84-86f4-b2212e7ea3f7\\\",\\\"systemUUID\\\":\\\"666c950e-4620-4706-912b-93ef77d5c70a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:11 crc kubenswrapper[4796]: E1125 14:26:11.616887 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.618980 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.619036 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.619054 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.619078 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.619096 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:11Z","lastTransitionTime":"2025-11-25T14:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.722120 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.722182 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.722199 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.722225 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.722242 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:11Z","lastTransitionTime":"2025-11-25T14:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.825528 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.825659 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.825693 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.825726 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.825749 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:11Z","lastTransitionTime":"2025-11-25T14:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.929916 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.930021 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.930050 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.930087 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:11 crc kubenswrapper[4796]: I1125 14:26:11.930127 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:11Z","lastTransitionTime":"2025-11-25T14:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.033786 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.033853 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.033871 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.033893 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.033931 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:12Z","lastTransitionTime":"2025-11-25T14:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.137417 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.137489 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.137511 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.137542 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.137563 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:12Z","lastTransitionTime":"2025-11-25T14:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.239892 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.239940 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.239957 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.239980 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.239999 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:12Z","lastTransitionTime":"2025-11-25T14:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.343295 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.343367 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.343390 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.343419 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.343440 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:12Z","lastTransitionTime":"2025-11-25T14:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.432186 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"486f6ef5-5af6-4f3b-95c5-6671b01e6333\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8138f1a15c606524d581b7f6a1df0a525c220956df5a8bbed60896f88a3a29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f153bed6bec
08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6293d39ca8c8441729d4bebee8cddb4408e0cfb5f8c596385bc1fe773689db0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de00e0e99c5bd903643a070ae5efdb84a565d30c8a2823733c6ec128859ef2ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.445977 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.446024 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.446040 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.446062 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.446079 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:12Z","lastTransitionTime":"2025-11-25T14:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.453887 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a94dac03862e96404919e02cfea8e138cd26ea3fc3c77d34956c432b1bf3ddaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.475363 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e7385de109702052d827d335e080cbdc5768f3547ad9d3e7641d62a54214e09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://526b974ac6a50cd2d8c848997a0956abc28ed0f466c486ec1b1066206ba957a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:12Z is after 
2025-08-24T17:21:41Z" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.494482 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.516200 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ch8mf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d0e5d28fbb41835a1f2790a85f8d340b3487500a92eb12385db1ff4ce4c85c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:44Z\\\",\\\"message\\\":\\\"2025-11-25T14:24:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_79685361-642c-4342-aae5-88fbfb152c76\\\\n2025-11-25T14:24:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_79685361-642c-4342-aae5-88fbfb152c76 to /host/opt/cni/bin/\\\\n2025-11-25T14:24:58Z [verbose] multus-daemon started\\\\n2025-11-25T14:24:58Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T14:25:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mz9q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ch8mf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.537009 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c683b765-b1f2-49b1-b29d-6466cda73ca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56cc567ae161b7a8bb9eb8fb77cfe437efec929b5e99c5709ea5ab9f26f70233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ae8d61d0942e0624997f6214aa104a793f603b
e378ede8e4896846b2f06db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qc5nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-h6xfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.548987 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.549060 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.549082 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:12 crc 
kubenswrapper[4796]: I1125 14:26:12.549112 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.549136 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:12Z","lastTransitionTime":"2025-11-25T14:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.558016 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f757e10-442d-44b6-aa1d-121bf848460e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://77bb71e0e54f1775d15613a46d20108d60646c9ca9c4c8518ad3aeba4e8f2d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://812463b79a0c55901cf740dd35df1619a443aa0dc66509d7553cd660311f58c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://812463b79a0c55901cf740dd35df1619a443aa0dc66509d7553cd660311f58c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:26:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.581881 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4b5234d-a2f9-45ac-87ba-8637e0672dd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 14:24:52.914458 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:24:52.915199 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:24:52.916189 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2760501505/tls.crt::/tmp/serving-cert-2760501505/tls.key\\\\\\\"\\\\nI1125 14:24:53.497260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:24:53.529131 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:24:53.529241 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:24:53.529279 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:24:53.529303 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:24:53.548142 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:24:53.548170 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548174 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:24:53.548178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:24:53.548181 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 
14:24:53.548183 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:24:53.548186 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:24:53.548248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:24:53.550142 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.596621 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n4f9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07d588f-1940-4a4b-a4a9-94451e43ec8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8gd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n4f9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:12 crc 
kubenswrapper[4796]: I1125 14:26:12.617564 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-w88nx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93354b1f-76e7-4d82-999f-8093919ba0f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8952a8bc3ebd62f576564daa6539ef70f24ff0ae0f02c01b7f38463e4f14dbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf7753dac7cbca6dbaa20243a305ad209eecd7b89539d5e53b984a218035e14a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e342f2264d326dbb91d324a0293597bd8919a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6ac404afcfff15030ced95791e34
2f2264d326dbb91d324a0293597bd8919a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://538519a1755271ddcc80059832da46f4aa729ad5d64d3b6e9fbc999fba8d5a86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0be442b0fc816985568c545325a84b2625a1e85d37d49d7a71a36ef69c897131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://f44ac753e69d1f0b55a6783b32117a3ee27ecfa94d73cdfa4af5e0021b5d6db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b648a980cfd0ab5b851b5503047ddce305ec38bffb22e5e2675db84018a93502\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:25:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rq6fl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-w88nx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.629474 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xphp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df57e51-3d48-434e-95e6-3d001fbf2871\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0ce4ff44bc6e216b4e0e568fe14bd001db446b47d2f953021cf81f8c0de6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc0
86a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8cnlh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xphp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.643898 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d223a119-11a5-4802-9e8e-645fdb31ea88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://021d2f3be55ec89048bafcd7e73eccf6bd883ad03e92be98479670ef2abde11b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ada54c8a99d59f4978026df480de5fa1fc20b
957d8273eef164cca6ef8a79cc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2wqfc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:25:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsz8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.652079 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.652146 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.652170 4796 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.652240 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.652267 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:12Z","lastTransitionTime":"2025-11-25T14:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.660665 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.677016 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://398599fc392a563f3450b57cb1a3144c427a95518df3683cabb1f0b643070f87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:26:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.691246 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.714016 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ 
MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/
var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:25:49Z\\\",\\\"message\\\":\\\".go:160\\\\nI1125 14:25:49.293527 6841 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:25:49.293774 6841 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:25:49.293984 6841 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:25:49.303338 6841 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 14:25:49.303402 6841 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:25:49.303449 6841 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 14:25:49.303482 6841 factory.go:656] Stopping watch factory\\\\nI1125 14:25:49.303515 6841 handler.go:208] Removed *v1.Node event 
handler 2\\\\nI1125 14:25:49.305989 6841 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1125 14:25:49.306025 6841 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1125 14:25:49.306140 6841 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:25:49.306223 6841 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 14:25:49.306346 6841 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:25:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-22sz8_openshift-ovn-kubernetes(6eddc136-852e-4cf9-9f8a-e9ec94fc14d3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/net
works/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:25:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8srjn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-22sz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.728477 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"447a616b-b891-4833-98e5-c5408231aece\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e898ed92981e7f3340ae53aa34928f0aa00dfb9be5464c60955ff9107bdbae8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d868f7a390763d85a4b449d223d6ee9fbc6f99da73b9887cf01c3e364412809b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0da41b1ad7c29895c550716e8325fa0ba9c6bf6d1fd16482df8751102b6cbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a479f5ec52a12bd47f277ecfceac00b5e57038ec2c49c77988496aa510c3b73a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://a479f5ec52a12bd47f277ecfceac00b5e57038ec2c49c77988496aa510c3b73a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:24:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:24:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.741638 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nz5r2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea9ffd59-232e-4973-8470-910389b782ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:24:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ea0978566887ab6585e0786b5af9444be7890904eefbf0b709512c0923a8aff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:24:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntq5c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:24:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nz5r2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:26:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.754746 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.754831 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.754853 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.754878 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.754895 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:12Z","lastTransitionTime":"2025-11-25T14:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.857389 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.857452 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.857471 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.857496 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.857514 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:12Z","lastTransitionTime":"2025-11-25T14:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.960357 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.960406 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.960423 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.960444 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:12 crc kubenswrapper[4796]: I1125 14:26:12.960461 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:12Z","lastTransitionTime":"2025-11-25T14:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.062605 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.062661 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.062698 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.062720 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.062738 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:13Z","lastTransitionTime":"2025-11-25T14:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.103135 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs\") pod \"network-metrics-daemon-n4f9r\" (UID: \"a07d588f-1940-4a4b-a4a9-94451e43ec8d\") " pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:26:13 crc kubenswrapper[4796]: E1125 14:26:13.103411 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:26:13 crc kubenswrapper[4796]: E1125 14:26:13.103540 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs podName:a07d588f-1940-4a4b-a4a9-94451e43ec8d nodeName:}" failed. No retries permitted until 2025-11-25 14:27:17.103511413 +0000 UTC m=+165.446620867 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs") pod "network-metrics-daemon-n4f9r" (UID: "a07d588f-1940-4a4b-a4a9-94451e43ec8d") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.166779 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.166845 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.166864 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.166889 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.166908 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:13Z","lastTransitionTime":"2025-11-25T14:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.270795 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.270868 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.270892 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.270921 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.270943 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:13Z","lastTransitionTime":"2025-11-25T14:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.373425 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.373606 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.373637 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.373666 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.373689 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:13Z","lastTransitionTime":"2025-11-25T14:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.408911 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.409006 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.409040 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:26:13 crc kubenswrapper[4796]: E1125 14:26:13.409182 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.409639 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:26:13 crc kubenswrapper[4796]: E1125 14:26:13.411741 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:26:13 crc kubenswrapper[4796]: E1125 14:26:13.412206 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:26:13 crc kubenswrapper[4796]: E1125 14:26:13.412556 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.431605 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.476009 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.476066 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.476085 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.476107 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.476124 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:13Z","lastTransitionTime":"2025-11-25T14:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.580445 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.580549 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.580567 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.580691 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.580708 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:13Z","lastTransitionTime":"2025-11-25T14:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.684848 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.684918 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.684940 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.684969 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.684990 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:13Z","lastTransitionTime":"2025-11-25T14:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.788435 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.788540 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.788558 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.788638 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.788663 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:13Z","lastTransitionTime":"2025-11-25T14:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.891966 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.892038 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.892057 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.892082 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.892099 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:13Z","lastTransitionTime":"2025-11-25T14:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.994863 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.994945 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.994968 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.994998 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:13 crc kubenswrapper[4796]: I1125 14:26:13.995019 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:13Z","lastTransitionTime":"2025-11-25T14:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.097790 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.097840 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.097857 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.097879 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.097898 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:14Z","lastTransitionTime":"2025-11-25T14:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.200540 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.200682 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.200704 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.200730 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.200748 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:14Z","lastTransitionTime":"2025-11-25T14:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.304178 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.304220 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.304231 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.304247 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.304257 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:14Z","lastTransitionTime":"2025-11-25T14:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.408136 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.408188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.408207 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.408229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.408247 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:14Z","lastTransitionTime":"2025-11-25T14:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.511660 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.511718 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.511736 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.511759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.511779 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:14Z","lastTransitionTime":"2025-11-25T14:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.615215 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.615314 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.615334 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.615480 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.615501 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:14Z","lastTransitionTime":"2025-11-25T14:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.718858 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.718911 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.718928 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.718950 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.718968 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:14Z","lastTransitionTime":"2025-11-25T14:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.823231 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.823326 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.823350 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.823378 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.823398 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:14Z","lastTransitionTime":"2025-11-25T14:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.926106 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.926174 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.926193 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.926218 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:14 crc kubenswrapper[4796]: I1125 14:26:14.926238 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:14Z","lastTransitionTime":"2025-11-25T14:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.029250 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.029305 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.029322 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.029346 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.029366 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:15Z","lastTransitionTime":"2025-11-25T14:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.131868 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.131946 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.131970 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.131998 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.132020 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:15Z","lastTransitionTime":"2025-11-25T14:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.235348 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.235408 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.235424 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.235447 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.235463 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:15Z","lastTransitionTime":"2025-11-25T14:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.338970 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.339084 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.339112 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.339140 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.339158 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:15Z","lastTransitionTime":"2025-11-25T14:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.409301 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:26:15 crc kubenswrapper[4796]: E1125 14:26:15.409456 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.409531 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.409603 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:26:15 crc kubenswrapper[4796]: E1125 14:26:15.409749 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:26:15 crc kubenswrapper[4796]: E1125 14:26:15.409958 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.410252 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:26:15 crc kubenswrapper[4796]: E1125 14:26:15.410557 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.441482 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.441521 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.441541 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.441562 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.441607 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:15Z","lastTransitionTime":"2025-11-25T14:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.544042 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.544123 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.544162 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.544195 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.544218 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:15Z","lastTransitionTime":"2025-11-25T14:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.647105 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.647186 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.647201 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.647237 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.647254 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:15Z","lastTransitionTime":"2025-11-25T14:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.750628 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.750691 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.750711 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.750736 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.750758 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:15Z","lastTransitionTime":"2025-11-25T14:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.855258 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.855326 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.855344 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.855369 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.855388 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:15Z","lastTransitionTime":"2025-11-25T14:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.958638 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.958705 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.958724 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.958749 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:15 crc kubenswrapper[4796]: I1125 14:26:15.958767 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:15Z","lastTransitionTime":"2025-11-25T14:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.062358 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.062396 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.062408 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.062423 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.062435 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:16Z","lastTransitionTime":"2025-11-25T14:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.166069 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.166139 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.166161 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.166192 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.166215 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:16Z","lastTransitionTime":"2025-11-25T14:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.269923 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.269987 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.270006 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.270030 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.270050 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:16Z","lastTransitionTime":"2025-11-25T14:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.372946 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.373332 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.373346 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.373362 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.373374 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:16Z","lastTransitionTime":"2025-11-25T14:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.410045 4796 scope.go:117] "RemoveContainer" containerID="e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c" Nov 25 14:26:16 crc kubenswrapper[4796]: E1125 14:26:16.410317 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-22sz8_openshift-ovn-kubernetes(6eddc136-852e-4cf9-9f8a-e9ec94fc14d3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.476225 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.476280 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.476299 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.476324 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.476341 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:16Z","lastTransitionTime":"2025-11-25T14:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.579453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.579521 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.579540 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.579567 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.579613 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:16Z","lastTransitionTime":"2025-11-25T14:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.683077 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.683153 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.683178 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.683202 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.683224 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:16Z","lastTransitionTime":"2025-11-25T14:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.786280 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.786357 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.786382 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.786410 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.786464 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:16Z","lastTransitionTime":"2025-11-25T14:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.889259 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.889330 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.889342 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.889360 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.889371 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:16Z","lastTransitionTime":"2025-11-25T14:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.992661 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.992699 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.992708 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.992723 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:16 crc kubenswrapper[4796]: I1125 14:26:16.992733 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:16Z","lastTransitionTime":"2025-11-25T14:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.096194 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.096265 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.096292 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.096324 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.096350 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:17Z","lastTransitionTime":"2025-11-25T14:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.199268 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.199354 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.199379 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.199407 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.199429 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:17Z","lastTransitionTime":"2025-11-25T14:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.302725 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.302787 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.302808 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.302844 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.302864 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:17Z","lastTransitionTime":"2025-11-25T14:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.405959 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.406034 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.406058 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.406087 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.406109 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:17Z","lastTransitionTime":"2025-11-25T14:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.409268 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.409363 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:26:17 crc kubenswrapper[4796]: E1125 14:26:17.409430 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:26:17 crc kubenswrapper[4796]: E1125 14:26:17.409516 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.409630 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.409685 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:26:17 crc kubenswrapper[4796]: E1125 14:26:17.409791 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:26:17 crc kubenswrapper[4796]: E1125 14:26:17.409904 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.508222 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.508258 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.508267 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.508283 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.508293 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:17Z","lastTransitionTime":"2025-11-25T14:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.611316 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.611390 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.611434 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.611462 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.611483 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:17Z","lastTransitionTime":"2025-11-25T14:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.713894 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.713937 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.713948 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.713963 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.713975 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:17Z","lastTransitionTime":"2025-11-25T14:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.816646 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.816707 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.816725 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.816748 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.816765 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:17Z","lastTransitionTime":"2025-11-25T14:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.920271 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.920356 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.920381 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.920412 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:17 crc kubenswrapper[4796]: I1125 14:26:17.920438 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:17Z","lastTransitionTime":"2025-11-25T14:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.023403 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.023531 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.023555 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.023620 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.023646 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:18Z","lastTransitionTime":"2025-11-25T14:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.127551 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.127654 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.127675 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.127700 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.127718 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:18Z","lastTransitionTime":"2025-11-25T14:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.231097 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.231152 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.231168 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.231190 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.231209 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:18Z","lastTransitionTime":"2025-11-25T14:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.334856 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.334940 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.334971 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.334998 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.335021 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:18Z","lastTransitionTime":"2025-11-25T14:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.437904 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.437977 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.438000 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.438042 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.438061 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:18Z","lastTransitionTime":"2025-11-25T14:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.540864 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.540924 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.540944 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.540967 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.540985 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:18Z","lastTransitionTime":"2025-11-25T14:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.643635 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.643692 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.643709 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.643735 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.643768 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:18Z","lastTransitionTime":"2025-11-25T14:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.747401 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.747457 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.747480 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.747512 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.747534 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:18Z","lastTransitionTime":"2025-11-25T14:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.849608 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.849671 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.849691 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.849724 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.849764 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:18Z","lastTransitionTime":"2025-11-25T14:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.952646 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.952716 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.952753 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.952791 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:18 crc kubenswrapper[4796]: I1125 14:26:18.952814 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:18Z","lastTransitionTime":"2025-11-25T14:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.055101 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.055156 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.055172 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.055194 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.055211 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:19Z","lastTransitionTime":"2025-11-25T14:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.157705 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.157764 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.157781 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.157807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.157825 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:19Z","lastTransitionTime":"2025-11-25T14:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.260969 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.261005 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.261015 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.261030 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.261041 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:19Z","lastTransitionTime":"2025-11-25T14:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.364490 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.364540 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.364556 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.364606 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.364625 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:19Z","lastTransitionTime":"2025-11-25T14:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.408512 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.408536 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.408688 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:26:19 crc kubenswrapper[4796]: E1125 14:26:19.408898 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.408947 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:26:19 crc kubenswrapper[4796]: E1125 14:26:19.409071 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:26:19 crc kubenswrapper[4796]: E1125 14:26:19.409183 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:26:19 crc kubenswrapper[4796]: E1125 14:26:19.409381 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.466968 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.467032 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.467052 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.467079 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.467100 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:19Z","lastTransitionTime":"2025-11-25T14:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.570450 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.570504 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.570521 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.570545 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.570562 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:19Z","lastTransitionTime":"2025-11-25T14:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.673997 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.674031 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.674039 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.674055 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.674067 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:19Z","lastTransitionTime":"2025-11-25T14:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.777326 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.777489 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.777521 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.777555 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.777653 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:19Z","lastTransitionTime":"2025-11-25T14:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.880643 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.880700 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.880717 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.880742 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.880764 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:19Z","lastTransitionTime":"2025-11-25T14:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.984303 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.984361 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.984379 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.984402 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:19 crc kubenswrapper[4796]: I1125 14:26:19.984420 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:19Z","lastTransitionTime":"2025-11-25T14:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.087124 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.087190 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.087209 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.087247 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.087278 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:20Z","lastTransitionTime":"2025-11-25T14:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.189435 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.189487 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.189499 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.189519 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.189532 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:20Z","lastTransitionTime":"2025-11-25T14:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.292136 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.292195 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.292213 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.292235 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.292251 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:20Z","lastTransitionTime":"2025-11-25T14:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.398988 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.399075 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.399103 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.399150 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.399193 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:20Z","lastTransitionTime":"2025-11-25T14:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.502167 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.502219 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.502235 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.502257 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.502273 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:20Z","lastTransitionTime":"2025-11-25T14:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.605850 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.605922 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.605946 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.605971 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.605988 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:20Z","lastTransitionTime":"2025-11-25T14:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.709156 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.709207 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.709229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.709257 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.709280 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:20Z","lastTransitionTime":"2025-11-25T14:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.812885 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.812975 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.812990 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.813011 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.813049 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:20Z","lastTransitionTime":"2025-11-25T14:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.916020 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.916062 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.916077 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.916109 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:20 crc kubenswrapper[4796]: I1125 14:26:20.916122 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:20Z","lastTransitionTime":"2025-11-25T14:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.019382 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.019468 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.019482 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.019535 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.019548 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:21Z","lastTransitionTime":"2025-11-25T14:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.122952 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.123023 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.123040 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.123066 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.123153 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:21Z","lastTransitionTime":"2025-11-25T14:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.226488 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.226546 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.226564 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.226616 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.226637 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:21Z","lastTransitionTime":"2025-11-25T14:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.329537 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.329648 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.329672 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.329704 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.329731 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:21Z","lastTransitionTime":"2025-11-25T14:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.408609 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.408608 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.408739 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:26:21 crc kubenswrapper[4796]: E1125 14:26:21.408783 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.408739 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:26:21 crc kubenswrapper[4796]: E1125 14:26:21.408946 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:26:21 crc kubenswrapper[4796]: E1125 14:26:21.409069 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:26:21 crc kubenswrapper[4796]: E1125 14:26:21.409196 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.433201 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.433252 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.433268 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.433288 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.433306 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:21Z","lastTransitionTime":"2025-11-25T14:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.539865 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.539935 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.539954 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.539977 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.539994 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:21Z","lastTransitionTime":"2025-11-25T14:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.642994 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.643056 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.643078 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.643102 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.643121 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:21Z","lastTransitionTime":"2025-11-25T14:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.672184 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.672245 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.672263 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.672287 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.672339 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:26:21Z","lastTransitionTime":"2025-11-25T14:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.739295 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-8jtlj"] Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.739650 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8jtlj" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.742629 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.742796 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.742888 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.743032 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.788743 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-w88nx" podStartSLOduration=87.788719012 podStartE2EDuration="1m27.788719012s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:21.788076322 +0000 UTC m=+110.131185766" watchObservedRunningTime="2025-11-25 14:26:21.788719012 +0000 UTC m=+110.131828456" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.800854 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-xphp5" podStartSLOduration=87.800825938 podStartE2EDuration="1m27.800825938s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:21.800707324 +0000 UTC m=+110.143816818" watchObservedRunningTime="2025-11-25 14:26:21.800825938 +0000 UTC 
m=+110.143935402" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.801407 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e8e180fe-5115-4b82-a999-2b274b207f53-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8jtlj\" (UID: \"e8e180fe-5115-4b82-a999-2b274b207f53\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8jtlj" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.801723 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8e180fe-5115-4b82-a999-2b274b207f53-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8jtlj\" (UID: \"e8e180fe-5115-4b82-a999-2b274b207f53\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8jtlj" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.801865 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e8e180fe-5115-4b82-a999-2b274b207f53-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8jtlj\" (UID: \"e8e180fe-5115-4b82-a999-2b274b207f53\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8jtlj" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.802774 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e8e180fe-5115-4b82-a999-2b274b207f53-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8jtlj\" (UID: \"e8e180fe-5115-4b82-a999-2b274b207f53\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8jtlj" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.802954 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/e8e180fe-5115-4b82-a999-2b274b207f53-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8jtlj\" (UID: \"e8e180fe-5115-4b82-a999-2b274b207f53\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8jtlj" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.832671 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsz8g" podStartSLOduration=86.832649165 podStartE2EDuration="1m26.832649165s" podCreationTimestamp="2025-11-25 14:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:21.813936608 +0000 UTC m=+110.157046082" watchObservedRunningTime="2025-11-25 14:26:21.832649165 +0000 UTC m=+110.175758609" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.869099 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=8.869081168 podStartE2EDuration="8.869081168s" podCreationTimestamp="2025-11-25 14:26:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:21.868668735 +0000 UTC m=+110.211778169" watchObservedRunningTime="2025-11-25 14:26:21.869081168 +0000 UTC m=+110.212190592" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.903419 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8e180fe-5115-4b82-a999-2b274b207f53-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8jtlj\" (UID: \"e8e180fe-5115-4b82-a999-2b274b207f53\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8jtlj" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.903487 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e8e180fe-5115-4b82-a999-2b274b207f53-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8jtlj\" (UID: \"e8e180fe-5115-4b82-a999-2b274b207f53\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8jtlj" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.903516 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8e180fe-5115-4b82-a999-2b274b207f53-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8jtlj\" (UID: \"e8e180fe-5115-4b82-a999-2b274b207f53\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8jtlj" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.903559 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e8e180fe-5115-4b82-a999-2b274b207f53-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8jtlj\" (UID: \"e8e180fe-5115-4b82-a999-2b274b207f53\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8jtlj" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.903719 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e8e180fe-5115-4b82-a999-2b274b207f53-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8jtlj\" (UID: \"e8e180fe-5115-4b82-a999-2b274b207f53\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8jtlj" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.903781 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e8e180fe-5115-4b82-a999-2b274b207f53-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8jtlj\" (UID: \"e8e180fe-5115-4b82-a999-2b274b207f53\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8jtlj" 
Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.903679 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e8e180fe-5115-4b82-a999-2b274b207f53-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8jtlj\" (UID: \"e8e180fe-5115-4b82-a999-2b274b207f53\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8jtlj" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.905242 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e8e180fe-5115-4b82-a999-2b274b207f53-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8jtlj\" (UID: \"e8e180fe-5115-4b82-a999-2b274b207f53\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8jtlj" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.911903 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8e180fe-5115-4b82-a999-2b274b207f53-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8jtlj\" (UID: \"e8e180fe-5115-4b82-a999-2b274b207f53\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8jtlj" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.915477 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-nz5r2" podStartSLOduration=87.915464579 podStartE2EDuration="1m27.915464579s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:21.885839923 +0000 UTC m=+110.228949347" watchObservedRunningTime="2025-11-25 14:26:21.915464579 +0000 UTC m=+110.258574003" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.922021 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/e8e180fe-5115-4b82-a999-2b274b207f53-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8jtlj\" (UID: \"e8e180fe-5115-4b82-a999-2b274b207f53\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8jtlj" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.942936 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=53.942915296 podStartE2EDuration="53.942915296s" podCreationTimestamp="2025-11-25 14:25:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:21.942321887 +0000 UTC m=+110.285431311" watchObservedRunningTime="2025-11-25 14:26:21.942915296 +0000 UTC m=+110.286024730" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.960645 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.960621741 podStartE2EDuration="1m28.960621741s" podCreationTimestamp="2025-11-25 14:24:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:21.960048003 +0000 UTC m=+110.303157427" watchObservedRunningTime="2025-11-25 14:26:21.960621741 +0000 UTC m=+110.303731165" Nov 25 14:26:21 crc kubenswrapper[4796]: I1125 14:26:21.983872 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=88.983854883 podStartE2EDuration="1m28.983854883s" podCreationTimestamp="2025-11-25 14:24:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:21.983071698 +0000 UTC m=+110.326181122" watchObservedRunningTime="2025-11-25 14:26:21.983854883 
+0000 UTC m=+110.326964307" Nov 25 14:26:22 crc kubenswrapper[4796]: I1125 14:26:22.064685 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8jtlj" Nov 25 14:26:22 crc kubenswrapper[4796]: I1125 14:26:22.109853 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-ch8mf" podStartSLOduration=88.109806295 podStartE2EDuration="1m28.109806295s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:22.092448751 +0000 UTC m=+110.435558195" watchObservedRunningTime="2025-11-25 14:26:22.109806295 +0000 UTC m=+110.452915719" Nov 25 14:26:22 crc kubenswrapper[4796]: I1125 14:26:22.110060 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podStartSLOduration=88.110051343 podStartE2EDuration="1m28.110051343s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:22.10745888 +0000 UTC m=+110.450568314" watchObservedRunningTime="2025-11-25 14:26:22.110051343 +0000 UTC m=+110.453160787" Nov 25 14:26:22 crc kubenswrapper[4796]: I1125 14:26:22.118349 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=16.118331877 podStartE2EDuration="16.118331877s" podCreationTimestamp="2025-11-25 14:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:22.116601873 +0000 UTC m=+110.459711297" watchObservedRunningTime="2025-11-25 14:26:22.118331877 +0000 UTC m=+110.461441301" Nov 25 
14:26:23 crc kubenswrapper[4796]: I1125 14:26:23.089961 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8jtlj" event={"ID":"e8e180fe-5115-4b82-a999-2b274b207f53","Type":"ContainerStarted","Data":"497d837d80bb46fdbb1fe0a80a992f389d7e7eb8b7f7d0eabc6e6539463db927"} Nov 25 14:26:23 crc kubenswrapper[4796]: I1125 14:26:23.090073 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8jtlj" event={"ID":"e8e180fe-5115-4b82-a999-2b274b207f53","Type":"ContainerStarted","Data":"1c1f3849141d8966f0612ba2ce49c1d4bc379f4a56f04f2ed1c549337f23f0e9"} Nov 25 14:26:23 crc kubenswrapper[4796]: I1125 14:26:23.109911 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8jtlj" podStartSLOduration=89.10988375 podStartE2EDuration="1m29.10988375s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:23.109698404 +0000 UTC m=+111.452807858" watchObservedRunningTime="2025-11-25 14:26:23.10988375 +0000 UTC m=+111.452993214" Nov 25 14:26:23 crc kubenswrapper[4796]: I1125 14:26:23.409026 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:26:23 crc kubenswrapper[4796]: E1125 14:26:23.409514 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:26:23 crc kubenswrapper[4796]: I1125 14:26:23.409161 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:26:23 crc kubenswrapper[4796]: I1125 14:26:23.409072 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:26:23 crc kubenswrapper[4796]: E1125 14:26:23.409628 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:26:23 crc kubenswrapper[4796]: I1125 14:26:23.409231 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:26:23 crc kubenswrapper[4796]: E1125 14:26:23.409777 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:26:23 crc kubenswrapper[4796]: E1125 14:26:23.409942 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:26:25 crc kubenswrapper[4796]: I1125 14:26:25.409144 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:26:25 crc kubenswrapper[4796]: I1125 14:26:25.409408 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:26:25 crc kubenswrapper[4796]: E1125 14:26:25.410028 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:26:25 crc kubenswrapper[4796]: I1125 14:26:25.409494 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:26:25 crc kubenswrapper[4796]: I1125 14:26:25.409464 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:26:25 crc kubenswrapper[4796]: E1125 14:26:25.410489 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:26:25 crc kubenswrapper[4796]: E1125 14:26:25.410735 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:26:25 crc kubenswrapper[4796]: E1125 14:26:25.410832 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:26:27 crc kubenswrapper[4796]: I1125 14:26:27.412746 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:26:27 crc kubenswrapper[4796]: I1125 14:26:27.412814 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:26:27 crc kubenswrapper[4796]: E1125 14:26:27.412914 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:26:27 crc kubenswrapper[4796]: E1125 14:26:27.413060 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:26:27 crc kubenswrapper[4796]: I1125 14:26:27.413110 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:26:27 crc kubenswrapper[4796]: E1125 14:26:27.413178 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:26:27 crc kubenswrapper[4796]: I1125 14:26:27.413210 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:26:27 crc kubenswrapper[4796]: E1125 14:26:27.413255 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:26:27 crc kubenswrapper[4796]: I1125 14:26:27.414146 4796 scope.go:117] "RemoveContainer" containerID="e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c" Nov 25 14:26:27 crc kubenswrapper[4796]: E1125 14:26:27.414314 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-22sz8_openshift-ovn-kubernetes(6eddc136-852e-4cf9-9f8a-e9ec94fc14d3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" Nov 25 14:26:29 crc kubenswrapper[4796]: I1125 14:26:29.409081 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:26:29 crc kubenswrapper[4796]: I1125 14:26:29.409165 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:26:29 crc kubenswrapper[4796]: I1125 14:26:29.409170 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:26:29 crc kubenswrapper[4796]: E1125 14:26:29.409247 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:26:29 crc kubenswrapper[4796]: I1125 14:26:29.409318 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:26:29 crc kubenswrapper[4796]: E1125 14:26:29.409368 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:26:29 crc kubenswrapper[4796]: E1125 14:26:29.409564 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:26:29 crc kubenswrapper[4796]: E1125 14:26:29.409741 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:26:31 crc kubenswrapper[4796]: I1125 14:26:31.131864 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ch8mf_7e00ee09-b0b0-4ae8-a51d-cc11fb99679b/kube-multus/1.log" Nov 25 14:26:31 crc kubenswrapper[4796]: I1125 14:26:31.132653 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ch8mf_7e00ee09-b0b0-4ae8-a51d-cc11fb99679b/kube-multus/0.log" Nov 25 14:26:31 crc kubenswrapper[4796]: I1125 14:26:31.132734 4796 generic.go:334] "Generic (PLEG): container finished" podID="7e00ee09-b0b0-4ae8-a51d-cc11fb99679b" containerID="3d0e5d28fbb41835a1f2790a85f8d340b3487500a92eb12385db1ff4ce4c85c9" exitCode=1 Nov 25 14:26:31 crc kubenswrapper[4796]: I1125 14:26:31.132775 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ch8mf" event={"ID":"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b","Type":"ContainerDied","Data":"3d0e5d28fbb41835a1f2790a85f8d340b3487500a92eb12385db1ff4ce4c85c9"} Nov 25 14:26:31 crc kubenswrapper[4796]: I1125 14:26:31.132824 4796 scope.go:117] "RemoveContainer" containerID="66c75759a1aae7e1738e1863a0d0ce4cc5a2854d91a3179383f2933ef89afeb1" Nov 25 14:26:31 crc kubenswrapper[4796]: I1125 14:26:31.133154 4796 scope.go:117] "RemoveContainer" containerID="3d0e5d28fbb41835a1f2790a85f8d340b3487500a92eb12385db1ff4ce4c85c9" Nov 25 14:26:31 crc kubenswrapper[4796]: E1125 
14:26:31.133318 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-ch8mf_openshift-multus(7e00ee09-b0b0-4ae8-a51d-cc11fb99679b)\"" pod="openshift-multus/multus-ch8mf" podUID="7e00ee09-b0b0-4ae8-a51d-cc11fb99679b" Nov 25 14:26:31 crc kubenswrapper[4796]: I1125 14:26:31.408782 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:26:31 crc kubenswrapper[4796]: I1125 14:26:31.408850 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:26:31 crc kubenswrapper[4796]: I1125 14:26:31.408782 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:26:31 crc kubenswrapper[4796]: E1125 14:26:31.408943 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:26:31 crc kubenswrapper[4796]: I1125 14:26:31.409009 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:26:31 crc kubenswrapper[4796]: E1125 14:26:31.409127 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:26:31 crc kubenswrapper[4796]: E1125 14:26:31.409308 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:26:31 crc kubenswrapper[4796]: E1125 14:26:31.409525 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:26:32 crc kubenswrapper[4796]: I1125 14:26:32.139969 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ch8mf_7e00ee09-b0b0-4ae8-a51d-cc11fb99679b/kube-multus/1.log" Nov 25 14:26:32 crc kubenswrapper[4796]: E1125 14:26:32.392438 4796 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 25 14:26:32 crc kubenswrapper[4796]: E1125 14:26:32.527434 4796 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 14:26:33 crc kubenswrapper[4796]: I1125 14:26:33.408558 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:26:33 crc kubenswrapper[4796]: I1125 14:26:33.408555 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:26:33 crc kubenswrapper[4796]: I1125 14:26:33.408622 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:26:33 crc kubenswrapper[4796]: I1125 14:26:33.408682 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:26:33 crc kubenswrapper[4796]: E1125 14:26:33.408798 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:26:33 crc kubenswrapper[4796]: E1125 14:26:33.408902 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:26:33 crc kubenswrapper[4796]: E1125 14:26:33.408966 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:26:33 crc kubenswrapper[4796]: E1125 14:26:33.409182 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:26:35 crc kubenswrapper[4796]: I1125 14:26:35.408502 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:26:35 crc kubenswrapper[4796]: I1125 14:26:35.408551 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:26:35 crc kubenswrapper[4796]: I1125 14:26:35.408509 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:26:35 crc kubenswrapper[4796]: E1125 14:26:35.408650 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:26:35 crc kubenswrapper[4796]: I1125 14:26:35.408699 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:26:35 crc kubenswrapper[4796]: E1125 14:26:35.408756 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:26:35 crc kubenswrapper[4796]: E1125 14:26:35.408807 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:26:35 crc kubenswrapper[4796]: E1125 14:26:35.408970 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:26:37 crc kubenswrapper[4796]: I1125 14:26:37.408963 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:26:37 crc kubenswrapper[4796]: I1125 14:26:37.409043 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:26:37 crc kubenswrapper[4796]: I1125 14:26:37.409152 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:26:37 crc kubenswrapper[4796]: E1125 14:26:37.409153 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:26:37 crc kubenswrapper[4796]: I1125 14:26:37.409235 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:26:37 crc kubenswrapper[4796]: E1125 14:26:37.409467 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:26:37 crc kubenswrapper[4796]: E1125 14:26:37.409736 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:26:37 crc kubenswrapper[4796]: E1125 14:26:37.409914 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:26:37 crc kubenswrapper[4796]: E1125 14:26:37.529086 4796 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 14:26:39 crc kubenswrapper[4796]: I1125 14:26:39.408936 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:26:39 crc kubenswrapper[4796]: I1125 14:26:39.409020 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:26:39 crc kubenswrapper[4796]: I1125 14:26:39.409045 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:26:39 crc kubenswrapper[4796]: I1125 14:26:39.408971 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:26:39 crc kubenswrapper[4796]: E1125 14:26:39.409126 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:26:39 crc kubenswrapper[4796]: E1125 14:26:39.409303 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:26:39 crc kubenswrapper[4796]: E1125 14:26:39.409547 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:26:39 crc kubenswrapper[4796]: E1125 14:26:39.409692 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:26:41 crc kubenswrapper[4796]: I1125 14:26:41.409126 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:26:41 crc kubenswrapper[4796]: I1125 14:26:41.409207 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:26:41 crc kubenswrapper[4796]: E1125 14:26:41.409859 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:26:41 crc kubenswrapper[4796]: I1125 14:26:41.409248 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:26:41 crc kubenswrapper[4796]: I1125 14:26:41.409840 4796 scope.go:117] "RemoveContainer" containerID="3d0e5d28fbb41835a1f2790a85f8d340b3487500a92eb12385db1ff4ce4c85c9" Nov 25 14:26:41 crc kubenswrapper[4796]: I1125 14:26:41.409233 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:26:41 crc kubenswrapper[4796]: E1125 14:26:41.410037 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:26:41 crc kubenswrapper[4796]: E1125 14:26:41.410144 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:26:41 crc kubenswrapper[4796]: E1125 14:26:41.410381 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:26:42 crc kubenswrapper[4796]: I1125 14:26:42.192206 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ch8mf_7e00ee09-b0b0-4ae8-a51d-cc11fb99679b/kube-multus/1.log" Nov 25 14:26:42 crc kubenswrapper[4796]: I1125 14:26:42.192297 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ch8mf" event={"ID":"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b","Type":"ContainerStarted","Data":"d921bf739f69487a8cd8d927c3aef547d59558cded69656dc16ab6fb56ee5f6e"} Nov 25 14:26:42 crc kubenswrapper[4796]: I1125 14:26:42.411879 4796 scope.go:117] "RemoveContainer" containerID="e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c" Nov 25 14:26:42 crc kubenswrapper[4796]: E1125 14:26:42.530537 4796 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 14:26:43 crc kubenswrapper[4796]: I1125 14:26:43.198165 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovnkube-controller/3.log" Nov 25 14:26:43 crc kubenswrapper[4796]: I1125 14:26:43.200327 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovn-acl-logging/0.log" Nov 25 14:26:43 crc kubenswrapper[4796]: I1125 14:26:43.203772 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerStarted","Data":"339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6"} Nov 25 14:26:43 crc kubenswrapper[4796]: I1125 14:26:43.204566 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:26:43 crc kubenswrapper[4796]: I1125 14:26:43.408974 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:26:43 crc kubenswrapper[4796]: E1125 14:26:43.409499 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:26:43 crc kubenswrapper[4796]: I1125 14:26:43.409075 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:26:43 crc kubenswrapper[4796]: I1125 14:26:43.409035 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:26:43 crc kubenswrapper[4796]: E1125 14:26:43.409649 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:26:43 crc kubenswrapper[4796]: I1125 14:26:43.409115 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:26:43 crc kubenswrapper[4796]: E1125 14:26:43.409813 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:26:43 crc kubenswrapper[4796]: E1125 14:26:43.409995 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:26:43 crc kubenswrapper[4796]: I1125 14:26:43.480645 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" podStartSLOduration=109.480565234 podStartE2EDuration="1m49.480565234s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:43.257201879 +0000 UTC m=+131.600311303" watchObservedRunningTime="2025-11-25 14:26:43.480565234 +0000 UTC m=+131.823674688" Nov 25 14:26:43 crc kubenswrapper[4796]: I1125 14:26:43.481153 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-n4f9r"] Nov 25 14:26:44 crc kubenswrapper[4796]: I1125 14:26:44.207009 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:26:44 crc kubenswrapper[4796]: E1125 14:26:44.207137 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d" Nov 25 14:26:45 crc kubenswrapper[4796]: I1125 14:26:45.408277 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 14:26:45 crc kubenswrapper[4796]: E1125 14:26:45.408426 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 14:26:45 crc kubenswrapper[4796]: I1125 14:26:45.409437 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 14:26:45 crc kubenswrapper[4796]: E1125 14:26:45.409516 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 14:26:45 crc kubenswrapper[4796]: I1125 14:26:45.409673 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 14:26:45 crc kubenswrapper[4796]: E1125 14:26:45.409744 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 14:26:46 crc kubenswrapper[4796]: I1125 14:26:46.408558 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r"
Nov 25 14:26:46 crc kubenswrapper[4796]: E1125 14:26:46.408742 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n4f9r" podUID="a07d588f-1940-4a4b-a4a9-94451e43ec8d"
Nov 25 14:26:47 crc kubenswrapper[4796]: I1125 14:26:47.408896 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 14:26:47 crc kubenswrapper[4796]: I1125 14:26:47.408898 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 14:26:47 crc kubenswrapper[4796]: I1125 14:26:47.409933 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 14:26:47 crc kubenswrapper[4796]: E1125 14:26:47.410193 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 14:26:47 crc kubenswrapper[4796]: E1125 14:26:47.410410 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 14:26:47 crc kubenswrapper[4796]: E1125 14:26:47.410684 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 14:26:48 crc kubenswrapper[4796]: I1125 14:26:48.409189 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r"
Nov 25 14:26:48 crc kubenswrapper[4796]: I1125 14:26:48.412464 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Nov 25 14:26:48 crc kubenswrapper[4796]: I1125 14:26:48.412767 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Nov 25 14:26:49 crc kubenswrapper[4796]: I1125 14:26:49.409325 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 14:26:49 crc kubenswrapper[4796]: I1125 14:26:49.409485 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 14:26:49 crc kubenswrapper[4796]: I1125 14:26:49.409332 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 14:26:49 crc kubenswrapper[4796]: I1125 14:26:49.412939 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Nov 25 14:26:49 crc kubenswrapper[4796]: I1125 14:26:49.413635 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Nov 25 14:26:49 crc kubenswrapper[4796]: I1125 14:26:49.413697 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Nov 25 14:26:49 crc kubenswrapper[4796]: I1125 14:26:49.413790 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.317814 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.384213 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jjf82"]
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.384967 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vzn94"]
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.385370 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jjf82"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.386176 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lvdx5"]
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.386320 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-vzn94"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.388083 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.388105 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-lvdx5"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.389906 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnvrv"]
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.389960 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.390626 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnvrv"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.392219 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-qj88x"]
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.393288 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vbgn5"]
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.394065 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.394380 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qj88x"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.395810 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bxtz9"]
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.397136 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd"]
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.398295 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-k6xrl"]
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.398685 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.398885 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k6xrl"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.399666 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bxtz9"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.407288 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-njqdf"]
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.408165 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55"]
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.408888 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-x57qm"]
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.409185 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-njqdf"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.410908 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.411343 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x57qm"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.412543 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fhbvb"]
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.413386 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-w9vpf"]
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.413663 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-fhbvb"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.414218 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-w9vpf"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.415428 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dr5s9"]
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.416245 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-tsl5t"]
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.416459 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.417071 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-tsl5t"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.421832 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.421981 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.422044 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.422127 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.422136 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.422319 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.423176 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.423349 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.423493 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.431797 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.432092 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.431809 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.431823 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.431930 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.431941 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.431967 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.431986 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.432652 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.432849 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.433150 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.433281 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-95xvf"]
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.433896 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qmn8p"]
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.434380 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-qmn8p"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.434403 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-95xvf"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.435718 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.435868 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jjf82"]
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.435981 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.439851 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.439907 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/453a1a57-5017-420d-b2e5-2fef1a7721f5-serving-cert\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.439931 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vbgn5\" (UID: \"8b75fc2c-7703-4bee-9e6b-6ea32511fc42\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.449565 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/76da93ba-dcf4-4f52-982f-ce98a9718cc8-audit-policies\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.449690 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pklj8\" (UniqueName: \"kubernetes.io/projected/e4d591da-4385-4890-ab0d-1a1ee8d934ea-kube-api-access-pklj8\") pod \"openshift-apiserver-operator-796bbdcf4f-lnvrv\" (UID: \"e4d591da-4385-4890-ab0d-1a1ee8d934ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnvrv"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.449982 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93145a04-d9cc-419c-aac9-a236aa357d00-config\") pod \"console-operator-58897d9998-w9vpf\" (UID: \"93145a04-d9cc-419c-aac9-a236aa357d00\") " pod="openshift-console-operator/console-operator-58897d9998-w9vpf"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.450045 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/67c0424c-b0ff-417d-bf4c-1cdcadd1ebac-images\") pod \"machine-api-operator-5694c8668f-lvdx5\" (UID: \"67c0424c-b0ff-417d-bf4c-1cdcadd1ebac\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lvdx5"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.450083 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d591da-4385-4890-ab0d-1a1ee8d934ea-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lnvrv\" (UID: \"e4d591da-4385-4890-ab0d-1a1ee8d934ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnvrv"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.450302 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxlpf\" (UniqueName: \"kubernetes.io/projected/e561356f-4d50-4b6a-86f5-d7796e069802-kube-api-access-hxlpf\") pod \"etcd-operator-b45778765-qmn8p\" (UID: \"e561356f-4d50-4b6a-86f5-d7796e069802\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qmn8p"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.450346 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/453a1a57-5017-420d-b2e5-2fef1a7721f5-etcd-client\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.450528 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.450655 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67c0424c-b0ff-417d-bf4c-1cdcadd1ebac-config\") pod \"machine-api-operator-5694c8668f-lvdx5\" (UID: \"67c0424c-b0ff-417d-bf4c-1cdcadd1ebac\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lvdx5"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.450707 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzhhb\" (UniqueName: \"kubernetes.io/projected/0d8de494-9c7a-47e6-afa1-47007836acd8-kube-api-access-qzhhb\") pod \"route-controller-manager-6576b87f9c-6tp55\" (UID: \"0d8de494-9c7a-47e6-afa1-47007836acd8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.450927 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-serving-cert\") pod \"controller-manager-879f6c89f-vbgn5\" (UID: \"8b75fc2c-7703-4bee-9e6b-6ea32511fc42\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.450970 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d95t\" (UniqueName: \"kubernetes.io/projected/524a60f2-4fff-4571-9f11-99d5178fd2a3-kube-api-access-9d95t\") pod \"cluster-samples-operator-665b6dd947-bxtz9\" (UID: \"524a60f2-4fff-4571-9f11-99d5178fd2a3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bxtz9"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.451134 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba-service-ca-bundle\") pod \"authentication-operator-69f744f599-fhbvb\" (UID: \"94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fhbvb"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.451177 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/453a1a57-5017-420d-b2e5-2fef1a7721f5-etcd-serving-ca\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.451212 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.451415 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fhbvb\" (UID: \"94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fhbvb"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.451458 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/524a60f2-4fff-4571-9f11-99d5178fd2a3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bxtz9\" (UID: \"524a60f2-4fff-4571-9f11-99d5178fd2a3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bxtz9"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.451718 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.451806 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.465988 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.467214 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgk4s\" (UniqueName: \"kubernetes.io/projected/76da93ba-dcf4-4f52-982f-ce98a9718cc8-kube-api-access-bgk4s\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.467479 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wz82\" (UniqueName: \"kubernetes.io/projected/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-kube-api-access-4wz82\") pod \"controller-manager-879f6c89f-vbgn5\" (UID: \"8b75fc2c-7703-4bee-9e6b-6ea32511fc42\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.467549 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.467624 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e561356f-4d50-4b6a-86f5-d7796e069802-etcd-service-ca\") pod \"etcd-operator-b45778765-qmn8p\" (UID: \"e561356f-4d50-4b6a-86f5-d7796e069802\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qmn8p"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.467656 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/453a1a57-5017-420d-b2e5-2fef1a7721f5-audit-dir\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.467695 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knsbb\" (UniqueName: \"kubernetes.io/projected/fa025925-c61e-49ae-ba50-79f4a401a20f-kube-api-access-knsbb\") pod \"console-f9d7485db-x57qm\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " pod="openshift-console/console-f9d7485db-x57qm"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.467731 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7b22dd74-4a14-454e-8b0d-9fdb57ce6653-audit-policies\") pod \"apiserver-7bbb656c7d-l7rnd\" (UID: \"7b22dd74-4a14-454e-8b0d-9fdb57ce6653\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.467757 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b22dd74-4a14-454e-8b0d-9fdb57ce6653-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-l7rnd\" (UID: \"7b22dd74-4a14-454e-8b0d-9fdb57ce6653\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.467789 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-client-ca\") pod \"controller-manager-879f6c89f-vbgn5\" (UID: \"8b75fc2c-7703-4bee-9e6b-6ea32511fc42\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.467820 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5991c579-d1dc-44d7-b62e-2465d9c2aa4b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-k6xrl\" (UID: \"5991c579-d1dc-44d7-b62e-2465d9c2aa4b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k6xrl"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.467847 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fa025925-c61e-49ae-ba50-79f4a401a20f-console-oauth-config\") pod \"console-f9d7485db-x57qm\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " pod="openshift-console/console-f9d7485db-x57qm"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.467883 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93145a04-d9cc-419c-aac9-a236aa357d00-serving-cert\") pod \"console-operator-58897d9998-w9vpf\" (UID: \"93145a04-d9cc-419c-aac9-a236aa357d00\") " pod="openshift-console-operator/console-operator-58897d9998-w9vpf"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.467907 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clfcw\" (UniqueName: \"kubernetes.io/projected/5991c579-d1dc-44d7-b62e-2465d9c2aa4b-kube-api-access-clfcw\") pod \"openshift-config-operator-7777fb866f-k6xrl\" (UID: \"5991c579-d1dc-44d7-b62e-2465d9c2aa4b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k6xrl"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.467934 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b22dd74-4a14-454e-8b0d-9fdb57ce6653-serving-cert\") pod \"apiserver-7bbb656c7d-l7rnd\" (UID: \"7b22dd74-4a14-454e-8b0d-9fdb57ce6653\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.467963 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e561356f-4d50-4b6a-86f5-d7796e069802-etcd-client\") pod \"etcd-operator-b45778765-qmn8p\" (UID: \"e561356f-4d50-4b6a-86f5-d7796e069802\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qmn8p"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.467995 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlfgb\" (UniqueName: \"kubernetes.io/projected/3d926d83-e3cc-4bf1-ba33-629f2c058590-kube-api-access-vlfgb\") pod \"cluster-image-registry-operator-dc59b4c8b-jjf82\" (UID: \"3d926d83-e3cc-4bf1-ba33-629f2c058590\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jjf82"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468016 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/453a1a57-5017-420d-b2e5-2fef1a7721f5-node-pullsecrets\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468056 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/453a1a57-5017-420d-b2e5-2fef1a7721f5-encryption-config\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468090 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa025925-c61e-49ae-ba50-79f4a401a20f-trusted-ca-bundle\") pod \"console-f9d7485db-x57qm\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " pod="openshift-console/console-f9d7485db-x57qm"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468118 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kst7\" (UniqueName: \"kubernetes.io/projected/67c0424c-b0ff-417d-bf4c-1cdcadd1ebac-kube-api-access-4kst7\") pod \"machine-api-operator-5694c8668f-lvdx5\" (UID: \"67c0424c-b0ff-417d-bf4c-1cdcadd1ebac\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lvdx5"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468142 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/93145a04-d9cc-419c-aac9-a236aa357d00-trusted-ca\") pod \"console-operator-58897d9998-w9vpf\" (UID: \"93145a04-d9cc-419c-aac9-a236aa357d00\") " pod="openshift-console-operator/console-operator-58897d9998-w9vpf"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468171 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468201 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/76da93ba-dcf4-4f52-982f-ce98a9718cc8-audit-dir\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468229 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/345ca21d-184d-4326-b97e-976d4190ae2f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-njqdf\" (UID: \"345ca21d-184d-4326-b97e-976d4190ae2f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-njqdf"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468256 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/453a1a57-5017-420d-b2e5-2fef1a7721f5-image-import-ca\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94"
Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468277 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d8de494-9c7a-47e6-afa1-47007836acd8-client-ca\")
pod \"route-controller-manager-6576b87f9c-6tp55\" (UID: \"0d8de494-9c7a-47e6-afa1-47007836acd8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468305 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5991c579-d1dc-44d7-b62e-2465d9c2aa4b-serving-cert\") pod \"openshift-config-operator-7777fb866f-k6xrl\" (UID: \"5991c579-d1dc-44d7-b62e-2465d9c2aa4b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k6xrl" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468334 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2-machine-approver-tls\") pod \"machine-approver-56656f9798-qj88x\" (UID: \"0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qj88x" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468360 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468410 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468440 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7b22dd74-4a14-454e-8b0d-9fdb57ce6653-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-l7rnd\" (UID: \"7b22dd74-4a14-454e-8b0d-9fdb57ce6653\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468469 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49vk7\" (UniqueName: \"kubernetes.io/projected/93145a04-d9cc-419c-aac9-a236aa357d00-kube-api-access-49vk7\") pod \"console-operator-58897d9998-w9vpf\" (UID: \"93145a04-d9cc-419c-aac9-a236aa357d00\") " pod="openshift-console-operator/console-operator-58897d9998-w9vpf" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468493 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/67c0424c-b0ff-417d-bf4c-1cdcadd1ebac-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lvdx5\" (UID: \"67c0424c-b0ff-417d-bf4c-1cdcadd1ebac\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lvdx5" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468522 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7b22dd74-4a14-454e-8b0d-9fdb57ce6653-etcd-client\") pod \"apiserver-7bbb656c7d-l7rnd\" (UID: \"7b22dd74-4a14-454e-8b0d-9fdb57ce6653\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468555 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e561356f-4d50-4b6a-86f5-d7796e069802-config\") pod \"etcd-operator-b45778765-qmn8p\" (UID: \"e561356f-4d50-4b6a-86f5-d7796e069802\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qmn8p" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468604 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d591da-4385-4890-ab0d-1a1ee8d934ea-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lnvrv\" (UID: \"e4d591da-4385-4890-ab0d-1a1ee8d934ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnvrv" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468629 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbnwr\" (UniqueName: \"kubernetes.io/projected/7b22dd74-4a14-454e-8b0d-9fdb57ce6653-kube-api-access-xbnwr\") pod \"apiserver-7bbb656c7d-l7rnd\" (UID: \"7b22dd74-4a14-454e-8b0d-9fdb57ce6653\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468663 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e561356f-4d50-4b6a-86f5-d7796e069802-etcd-ca\") pod \"etcd-operator-b45778765-qmn8p\" (UID: \"e561356f-4d50-4b6a-86f5-d7796e069802\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qmn8p" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468690 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/453a1a57-5017-420d-b2e5-2fef1a7721f5-config\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468719 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d8de494-9c7a-47e6-afa1-47007836acd8-config\") pod \"route-controller-manager-6576b87f9c-6tp55\" (UID: \"0d8de494-9c7a-47e6-afa1-47007836acd8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468746 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468770 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbpmv\" (UniqueName: \"kubernetes.io/projected/453a1a57-5017-420d-b2e5-2fef1a7721f5-kube-api-access-hbpmv\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468799 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa025925-c61e-49ae-ba50-79f4a401a20f-console-serving-cert\") pod \"console-f9d7485db-x57qm\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " pod="openshift-console/console-f9d7485db-x57qm" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468826 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-ocp-branding-template\") 
pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468854 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7b22dd74-4a14-454e-8b0d-9fdb57ce6653-encryption-config\") pod \"apiserver-7bbb656c7d-l7rnd\" (UID: \"7b22dd74-4a14-454e-8b0d-9fdb57ce6653\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468882 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3d926d83-e3cc-4bf1-ba33-629f2c058590-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-jjf82\" (UID: \"3d926d83-e3cc-4bf1-ba33-629f2c058590\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jjf82" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468909 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d8de494-9c7a-47e6-afa1-47007836acd8-serving-cert\") pod \"route-controller-manager-6576b87f9c-6tp55\" (UID: \"0d8de494-9c7a-47e6-afa1-47007836acd8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468936 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2-auth-proxy-config\") pod \"machine-approver-56656f9798-qj88x\" (UID: \"0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qj88x" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 
14:26:52.468961 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g86ll\" (UniqueName: \"kubernetes.io/projected/0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2-kube-api-access-g86ll\") pod \"machine-approver-56656f9798-qj88x\" (UID: \"0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qj88x" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.468984 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fa025925-c61e-49ae-ba50-79f4a401a20f-oauth-serving-cert\") pod \"console-f9d7485db-x57qm\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " pod="openshift-console/console-f9d7485db-x57qm" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.469019 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d926d83-e3cc-4bf1-ba33-629f2c058590-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-jjf82\" (UID: \"3d926d83-e3cc-4bf1-ba33-629f2c058590\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jjf82" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.469081 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba-serving-cert\") pod \"authentication-operator-69f744f599-fhbvb\" (UID: \"94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fhbvb" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.469319 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/3d926d83-e3cc-4bf1-ba33-629f2c058590-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-jjf82\" (UID: \"3d926d83-e3cc-4bf1-ba33-629f2c058590\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jjf82" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.469368 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fa025925-c61e-49ae-ba50-79f4a401a20f-console-config\") pod \"console-f9d7485db-x57qm\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " pod="openshift-console/console-f9d7485db-x57qm" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.469537 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba-config\") pod \"authentication-operator-69f744f599-fhbvb\" (UID: \"94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fhbvb" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.469599 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e561356f-4d50-4b6a-86f5-d7796e069802-serving-cert\") pod \"etcd-operator-b45778765-qmn8p\" (UID: \"e561356f-4d50-4b6a-86f5-d7796e069802\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qmn8p" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.469619 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lvdx5"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.469640 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/453a1a57-5017-420d-b2e5-2fef1a7721f5-audit\") pod 
\"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.469667 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vzn94"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.469684 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qpbx9"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.470520 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.471182 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.471309 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.471408 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.471500 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.470739 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.471848 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.470940 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.472142 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.470984 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.471090 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.471114 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.472904 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.473078 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qpbx9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.469667 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-config\") pod \"controller-manager-879f6c89f-vbgn5\" (UID: \"8b75fc2c-7703-4bee-9e6b-6ea32511fc42\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.473433 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fa025925-c61e-49ae-ba50-79f4a401a20f-service-ca\") pod \"console-f9d7485db-x57qm\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " pod="openshift-console/console-f9d7485db-x57qm" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.473468 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/345ca21d-184d-4326-b97e-976d4190ae2f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-njqdf\" (UID: \"345ca21d-184d-4326-b97e-976d4190ae2f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-njqdf" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.473504 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2-config\") pod \"machine-approver-56656f9798-qj88x\" (UID: \"0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qj88x" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.473530 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jtww4\" (UniqueName: \"kubernetes.io/projected/345ca21d-184d-4326-b97e-976d4190ae2f-kube-api-access-jtww4\") pod \"openshift-controller-manager-operator-756b6f6bc6-njqdf\" (UID: \"345ca21d-184d-4326-b97e-976d4190ae2f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-njqdf" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.473566 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcjx6\" (UniqueName: \"kubernetes.io/projected/94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba-kube-api-access-pcjx6\") pod \"authentication-operator-69f744f599-fhbvb\" (UID: \"94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fhbvb" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.473621 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/453a1a57-5017-420d-b2e5-2fef1a7721f5-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.473652 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lmvt\" (UniqueName: \"kubernetes.io/projected/c239761f-ade6-47eb-8fa5-f5178577ccb1-kube-api-access-5lmvt\") pod \"downloads-7954f5f757-tsl5t\" (UID: \"c239761f-ade6-47eb-8fa5-f5178577ccb1\") " pod="openshift-console/downloads-7954f5f757-tsl5t" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.473681 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7b22dd74-4a14-454e-8b0d-9fdb57ce6653-audit-dir\") pod \"apiserver-7bbb656c7d-l7rnd\" (UID: 
\"7b22dd74-4a14-454e-8b0d-9fdb57ce6653\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.474467 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.474831 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.475335 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.475398 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.475502 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.475551 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.491274 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bzlvn"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.492000 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-bzlvn" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.492125 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.492462 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.492529 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.492782 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.492816 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.492923 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.492970 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.493008 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.493070 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.493122 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 
14:26:52.493203 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.493206 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.493336 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.493437 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.493480 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.493820 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.493867 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.493950 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.494065 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.494178 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.494277 4796 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.494924 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.495220 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.497554 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-c6rl5"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.498334 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-c6rl5" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.502059 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.502201 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.502298 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.503828 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.504649 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.505608 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.505799 4796 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.506171 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.506264 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.506335 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.506489 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.506598 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.506352 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.506383 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.506416 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.506442 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.506677 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 14:26:52 crc 
kubenswrapper[4796]: I1125 14:26:52.506492 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.506700 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.507525 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.507708 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.506789 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.507915 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.507983 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.507930 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.507957 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.508147 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.508303 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.508443 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zph9v"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.508955 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zph9v" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.509290 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zgvlv"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.509838 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zgvlv" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.510104 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.510905 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jz265"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.512066 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.512198 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jz265" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.512959 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.513803 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.514165 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64jzs"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.514856 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.517657 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j22d5"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.526612 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64jzs" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.527999 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4g8pn"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.533278 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.536471 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j22d5" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.541335 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.541638 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.543719 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.547508 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.550674 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.550670 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rtsgw"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.550791 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4g8pn" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.552043 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wdh6v"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.552399 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rtsgw" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.552637 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-c8b7v"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.552976 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wdh6v" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.553904 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.554119 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-vkwpv"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.554607 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-c8b7v" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.555215 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6sp9g"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.555329 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vkwpv" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.555644 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b72mt"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.555881 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6sp9g" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.555947 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401335-kz9sv"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.556084 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b72mt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.556253 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jvr77"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.556557 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4dt7c"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.556716 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401335-kz9sv" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.556878 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-spphm"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.557077 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jvr77" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.557320 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7rkrg"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.557234 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dt7c" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.557754 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-spphm" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.558611 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9lndl"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.559531 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-k6xrl"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.559656 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9lndl" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.559695 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-njqdf"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.559819 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7rkrg" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.561378 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bxtz9"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.565483 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zgvlv"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.567006 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.568525 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.569563 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-etcd-operator/etcd-operator-b45778765-qmn8p"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.571386 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.573530 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zph9v"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.573560 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-x57qm"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.575624 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-w9vpf"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.576059 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e561356f-4d50-4b6a-86f5-d7796e069802-etcd-ca\") pod \"etcd-operator-b45778765-qmn8p\" (UID: \"e561356f-4d50-4b6a-86f5-d7796e069802\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qmn8p" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.576158 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/453a1a57-5017-420d-b2e5-2fef1a7721f5-config\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.576238 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d8de494-9c7a-47e6-afa1-47007836acd8-config\") pod \"route-controller-manager-6576b87f9c-6tp55\" (UID: \"0d8de494-9c7a-47e6-afa1-47007836acd8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55" Nov 25 
14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.576308 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.576381 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7b22dd74-4a14-454e-8b0d-9fdb57ce6653-encryption-config\") pod \"apiserver-7bbb656c7d-l7rnd\" (UID: \"7b22dd74-4a14-454e-8b0d-9fdb57ce6653\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.576453 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbpmv\" (UniqueName: \"kubernetes.io/projected/453a1a57-5017-420d-b2e5-2fef1a7721f5-kube-api-access-hbpmv\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.576533 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa025925-c61e-49ae-ba50-79f4a401a20f-console-serving-cert\") pod \"console-f9d7485db-x57qm\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " pod="openshift-console/console-f9d7485db-x57qm" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.576624 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: 
\"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.576703 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3d926d83-e3cc-4bf1-ba33-629f2c058590-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-jjf82\" (UID: \"3d926d83-e3cc-4bf1-ba33-629f2c058590\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jjf82" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.576776 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d8de494-9c7a-47e6-afa1-47007836acd8-serving-cert\") pod \"route-controller-manager-6576b87f9c-6tp55\" (UID: \"0d8de494-9c7a-47e6-afa1-47007836acd8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.576863 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2-auth-proxy-config\") pod \"machine-approver-56656f9798-qj88x\" (UID: \"0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qj88x" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.576950 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g86ll\" (UniqueName: \"kubernetes.io/projected/0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2-kube-api-access-g86ll\") pod \"machine-approver-56656f9798-qj88x\" (UID: \"0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qj88x" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.577030 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fa025925-c61e-49ae-ba50-79f4a401a20f-oauth-serving-cert\") pod \"console-f9d7485db-x57qm\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " pod="openshift-console/console-f9d7485db-x57qm" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.577097 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d926d83-e3cc-4bf1-ba33-629f2c058590-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-jjf82\" (UID: \"3d926d83-e3cc-4bf1-ba33-629f2c058590\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jjf82" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.577167 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba-serving-cert\") pod \"authentication-operator-69f744f599-fhbvb\" (UID: \"94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fhbvb" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.577258 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3d926d83-e3cc-4bf1-ba33-629f2c058590-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-jjf82\" (UID: \"3d926d83-e3cc-4bf1-ba33-629f2c058590\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jjf82" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.577329 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fa025925-c61e-49ae-ba50-79f4a401a20f-console-config\") pod \"console-f9d7485db-x57qm\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " pod="openshift-console/console-f9d7485db-x57qm" Nov 25 14:26:52 crc 
kubenswrapper[4796]: I1125 14:26:52.577407 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba-config\") pod \"authentication-operator-69f744f599-fhbvb\" (UID: \"94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fhbvb" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.577478 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e561356f-4d50-4b6a-86f5-d7796e069802-serving-cert\") pod \"etcd-operator-b45778765-qmn8p\" (UID: \"e561356f-4d50-4b6a-86f5-d7796e069802\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qmn8p" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.577547 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/453a1a57-5017-420d-b2e5-2fef1a7721f5-audit\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.577640 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-config\") pod \"controller-manager-879f6c89f-vbgn5\" (UID: \"8b75fc2c-7703-4bee-9e6b-6ea32511fc42\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.577710 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fa025925-c61e-49ae-ba50-79f4a401a20f-service-ca\") pod \"console-f9d7485db-x57qm\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " pod="openshift-console/console-f9d7485db-x57qm" Nov 25 14:26:52 crc kubenswrapper[4796]: 
I1125 14:26:52.577778 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/345ca21d-184d-4326-b97e-976d4190ae2f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-njqdf\" (UID: \"345ca21d-184d-4326-b97e-976d4190ae2f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-njqdf" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.577872 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2-config\") pod \"machine-approver-56656f9798-qj88x\" (UID: \"0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qj88x" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.577960 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtww4\" (UniqueName: \"kubernetes.io/projected/345ca21d-184d-4326-b97e-976d4190ae2f-kube-api-access-jtww4\") pod \"openshift-controller-manager-operator-756b6f6bc6-njqdf\" (UID: \"345ca21d-184d-4326-b97e-976d4190ae2f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-njqdf" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.578042 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7b22dd74-4a14-454e-8b0d-9fdb57ce6653-audit-dir\") pod \"apiserver-7bbb656c7d-l7rnd\" (UID: \"7b22dd74-4a14-454e-8b0d-9fdb57ce6653\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.578109 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcjx6\" (UniqueName: \"kubernetes.io/projected/94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba-kube-api-access-pcjx6\") pod 
\"authentication-operator-69f744f599-fhbvb\" (UID: \"94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fhbvb" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.578173 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/453a1a57-5017-420d-b2e5-2fef1a7721f5-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.578248 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lmvt\" (UniqueName: \"kubernetes.io/projected/c239761f-ade6-47eb-8fa5-f5178577ccb1-kube-api-access-5lmvt\") pod \"downloads-7954f5f757-tsl5t\" (UID: \"c239761f-ade6-47eb-8fa5-f5178577ccb1\") " pod="openshift-console/downloads-7954f5f757-tsl5t" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.578330 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.578402 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/453a1a57-5017-420d-b2e5-2fef1a7721f5-serving-cert\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.578466 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vbgn5\" (UID: \"8b75fc2c-7703-4bee-9e6b-6ea32511fc42\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.578559 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/76da93ba-dcf4-4f52-982f-ce98a9718cc8-audit-policies\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.578643 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pklj8\" (UniqueName: \"kubernetes.io/projected/e4d591da-4385-4890-ab0d-1a1ee8d934ea-kube-api-access-pklj8\") pod \"openshift-apiserver-operator-796bbdcf4f-lnvrv\" (UID: \"e4d591da-4385-4890-ab0d-1a1ee8d934ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnvrv" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.578734 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93145a04-d9cc-419c-aac9-a236aa357d00-config\") pod \"console-operator-58897d9998-w9vpf\" (UID: \"93145a04-d9cc-419c-aac9-a236aa357d00\") " pod="openshift-console-operator/console-operator-58897d9998-w9vpf" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.578802 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/67c0424c-b0ff-417d-bf4c-1cdcadd1ebac-images\") pod \"machine-api-operator-5694c8668f-lvdx5\" (UID: \"67c0424c-b0ff-417d-bf4c-1cdcadd1ebac\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lvdx5" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.578874 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d591da-4385-4890-ab0d-1a1ee8d934ea-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lnvrv\" (UID: \"e4d591da-4385-4890-ab0d-1a1ee8d934ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnvrv" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.578946 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxlpf\" (UniqueName: \"kubernetes.io/projected/e561356f-4d50-4b6a-86f5-d7796e069802-kube-api-access-hxlpf\") pod \"etcd-operator-b45778765-qmn8p\" (UID: \"e561356f-4d50-4b6a-86f5-d7796e069802\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qmn8p" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.579009 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/453a1a57-5017-420d-b2e5-2fef1a7721f5-etcd-client\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.579080 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.579149 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67c0424c-b0ff-417d-bf4c-1cdcadd1ebac-config\") pod \"machine-api-operator-5694c8668f-lvdx5\" (UID: \"67c0424c-b0ff-417d-bf4c-1cdcadd1ebac\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-lvdx5" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.606813 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzhhb\" (UniqueName: \"kubernetes.io/projected/0d8de494-9c7a-47e6-afa1-47007836acd8-kube-api-access-qzhhb\") pod \"route-controller-manager-6576b87f9c-6tp55\" (UID: \"0d8de494-9c7a-47e6-afa1-47007836acd8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.606868 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-serving-cert\") pod \"controller-manager-879f6c89f-vbgn5\" (UID: \"8b75fc2c-7703-4bee-9e6b-6ea32511fc42\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.606892 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d95t\" (UniqueName: \"kubernetes.io/projected/524a60f2-4fff-4571-9f11-99d5178fd2a3-kube-api-access-9d95t\") pod \"cluster-samples-operator-665b6dd947-bxtz9\" (UID: \"524a60f2-4fff-4571-9f11-99d5178fd2a3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bxtz9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.606922 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.606946 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba-service-ca-bundle\") pod \"authentication-operator-69f744f599-fhbvb\" (UID: \"94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fhbvb" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.606976 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/453a1a57-5017-420d-b2e5-2fef1a7721f5-etcd-serving-ca\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.606996 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/524a60f2-4fff-4571-9f11-99d5178fd2a3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bxtz9\" (UID: \"524a60f2-4fff-4571-9f11-99d5178fd2a3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bxtz9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.607014 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.607058 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.607095 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgk4s\" (UniqueName: \"kubernetes.io/projected/76da93ba-dcf4-4f52-982f-ce98a9718cc8-kube-api-access-bgk4s\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.607517 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/453a1a57-5017-420d-b2e5-2fef1a7721f5-config\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.607705 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2-config\") pod \"machine-approver-56656f9798-qj88x\" (UID: \"0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qj88x" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.577886 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2-auth-proxy-config\") pod \"machine-approver-56656f9798-qj88x\" (UID: \"0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qj88x" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.608347 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67c0424c-b0ff-417d-bf4c-1cdcadd1ebac-config\") pod \"machine-api-operator-5694c8668f-lvdx5\" (UID: 
\"67c0424c-b0ff-417d-bf4c-1cdcadd1ebac\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lvdx5" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.608347 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fhbvb\" (UID: \"94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fhbvb" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.608641 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d8de494-9c7a-47e6-afa1-47007836acd8-config\") pod \"route-controller-manager-6576b87f9c-6tp55\" (UID: \"0d8de494-9c7a-47e6-afa1-47007836acd8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.609188 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.576319 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnvrv"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.609261 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bzlvn"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.609276 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qpbx9"] Nov 25 14:26:52 crc kubenswrapper[4796]: 
I1125 14:26:52.609290 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vbgn5"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.609313 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64jzs"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.609323 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rtsgw"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.609419 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7b22dd74-4a14-454e-8b0d-9fdb57ce6653-audit-dir\") pod \"apiserver-7bbb656c7d-l7rnd\" (UID: \"7b22dd74-4a14-454e-8b0d-9fdb57ce6653\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.609790 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa025925-c61e-49ae-ba50-79f4a401a20f-console-serving-cert\") pod \"console-f9d7485db-x57qm\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " pod="openshift-console/console-f9d7485db-x57qm" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.609800 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d591da-4385-4890-ab0d-1a1ee8d934ea-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lnvrv\" (UID: \"e4d591da-4385-4890-ab0d-1a1ee8d934ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnvrv" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.610273 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d8de494-9c7a-47e6-afa1-47007836acd8-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-6tp55\" (UID: \"0d8de494-9c7a-47e6-afa1-47007836acd8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.610282 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7b22dd74-4a14-454e-8b0d-9fdb57ce6653-encryption-config\") pod \"apiserver-7bbb656c7d-l7rnd\" (UID: \"7b22dd74-4a14-454e-8b0d-9fdb57ce6653\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.610377 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/453a1a57-5017-420d-b2e5-2fef1a7721f5-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.611053 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba-service-ca-bundle\") pod \"authentication-operator-69f744f599-fhbvb\" (UID: \"94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fhbvb" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.585101 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/345ca21d-184d-4326-b97e-976d4190ae2f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-njqdf\" (UID: \"345ca21d-184d-4326-b97e-976d4190ae2f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-njqdf" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.612090 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/fa025925-c61e-49ae-ba50-79f4a401a20f-console-config\") pod \"console-f9d7485db-x57qm\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " pod="openshift-console/console-f9d7485db-x57qm" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.612464 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/453a1a57-5017-420d-b2e5-2fef1a7721f5-etcd-serving-ca\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.621648 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.622358 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/67c0424c-b0ff-417d-bf4c-1cdcadd1ebac-images\") pod \"machine-api-operator-5694c8668f-lvdx5\" (UID: \"67c0424c-b0ff-417d-bf4c-1cdcadd1ebac\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lvdx5" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.623553 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vbgn5\" (UID: \"8b75fc2c-7703-4bee-9e6b-6ea32511fc42\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.623796 4796 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fa025925-c61e-49ae-ba50-79f4a401a20f-oauth-serving-cert\") pod \"console-f9d7485db-x57qm\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " pod="openshift-console/console-f9d7485db-x57qm" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.623818 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.623833 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fhbvb\" (UID: \"94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fhbvb" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.623886 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wz82\" (UniqueName: \"kubernetes.io/projected/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-kube-api-access-4wz82\") pod \"controller-manager-879f6c89f-vbgn5\" (UID: \"8b75fc2c-7703-4bee-9e6b-6ea32511fc42\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.623907 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.623933 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e561356f-4d50-4b6a-86f5-d7796e069802-etcd-service-ca\") pod \"etcd-operator-b45778765-qmn8p\" (UID: \"e561356f-4d50-4b6a-86f5-d7796e069802\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qmn8p" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.624181 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/453a1a57-5017-420d-b2e5-2fef1a7721f5-audit-dir\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.627741 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knsbb\" (UniqueName: \"kubernetes.io/projected/fa025925-c61e-49ae-ba50-79f4a401a20f-kube-api-access-knsbb\") pod \"console-f9d7485db-x57qm\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " pod="openshift-console/console-f9d7485db-x57qm" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.628909 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7b22dd74-4a14-454e-8b0d-9fdb57ce6653-audit-policies\") pod \"apiserver-7bbb656c7d-l7rnd\" (UID: \"7b22dd74-4a14-454e-8b0d-9fdb57ce6653\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.629016 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-serving-cert\") pod \"controller-manager-879f6c89f-vbgn5\" (UID: \"8b75fc2c-7703-4bee-9e6b-6ea32511fc42\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.628545 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.628628 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/453a1a57-5017-420d-b2e5-2fef1a7721f5-etcd-client\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.629223 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b22dd74-4a14-454e-8b0d-9fdb57ce6653-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-l7rnd\" (UID: \"7b22dd74-4a14-454e-8b0d-9fdb57ce6653\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.629321 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-client-ca\") pod \"controller-manager-879f6c89f-vbgn5\" (UID: \"8b75fc2c-7703-4bee-9e6b-6ea32511fc42\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.629410 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5991c579-d1dc-44d7-b62e-2465d9c2aa4b-available-featuregates\") pod 
\"openshift-config-operator-7777fb866f-k6xrl\" (UID: \"5991c579-d1dc-44d7-b62e-2465d9c2aa4b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k6xrl" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.629526 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fa025925-c61e-49ae-ba50-79f4a401a20f-console-oauth-config\") pod \"console-f9d7485db-x57qm\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " pod="openshift-console/console-f9d7485db-x57qm" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.629638 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b22dd74-4a14-454e-8b0d-9fdb57ce6653-serving-cert\") pod \"apiserver-7bbb656c7d-l7rnd\" (UID: \"7b22dd74-4a14-454e-8b0d-9fdb57ce6653\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.629757 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93145a04-d9cc-419c-aac9-a236aa357d00-serving-cert\") pod \"console-operator-58897d9998-w9vpf\" (UID: \"93145a04-d9cc-419c-aac9-a236aa357d00\") " pod="openshift-console-operator/console-operator-58897d9998-w9vpf" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.629857 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clfcw\" (UniqueName: \"kubernetes.io/projected/5991c579-d1dc-44d7-b62e-2465d9c2aa4b-kube-api-access-clfcw\") pod \"openshift-config-operator-7777fb866f-k6xrl\" (UID: \"5991c579-d1dc-44d7-b62e-2465d9c2aa4b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k6xrl" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.629964 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/e561356f-4d50-4b6a-86f5-d7796e069802-etcd-client\") pod \"etcd-operator-b45778765-qmn8p\" (UID: \"e561356f-4d50-4b6a-86f5-d7796e069802\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qmn8p" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.630080 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlfgb\" (UniqueName: \"kubernetes.io/projected/3d926d83-e3cc-4bf1-ba33-629f2c058590-kube-api-access-vlfgb\") pod \"cluster-image-registry-operator-dc59b4c8b-jjf82\" (UID: \"3d926d83-e3cc-4bf1-ba33-629f2c058590\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jjf82" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.630189 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/453a1a57-5017-420d-b2e5-2fef1a7721f5-node-pullsecrets\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.630316 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/453a1a57-5017-420d-b2e5-2fef1a7721f5-encryption-config\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.630424 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa025925-c61e-49ae-ba50-79f4a401a20f-trusted-ca-bundle\") pod \"console-f9d7485db-x57qm\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " pod="openshift-console/console-f9d7485db-x57qm" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.630551 4796 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-4kst7\" (UniqueName: \"kubernetes.io/projected/67c0424c-b0ff-417d-bf4c-1cdcadd1ebac-kube-api-access-4kst7\") pod \"machine-api-operator-5694c8668f-lvdx5\" (UID: \"67c0424c-b0ff-417d-bf4c-1cdcadd1ebac\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lvdx5" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.630676 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/93145a04-d9cc-419c-aac9-a236aa357d00-trusted-ca\") pod \"console-operator-58897d9998-w9vpf\" (UID: \"93145a04-d9cc-419c-aac9-a236aa357d00\") " pod="openshift-console-operator/console-operator-58897d9998-w9vpf" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.630768 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.630884 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/345ca21d-184d-4326-b97e-976d4190ae2f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-njqdf\" (UID: \"345ca21d-184d-4326-b97e-976d4190ae2f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-njqdf" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.631381 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/76da93ba-dcf4-4f52-982f-ce98a9718cc8-audit-dir\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.631412 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2-machine-approver-tls\") pod \"machine-approver-56656f9798-qj88x\" (UID: \"0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qj88x" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.631433 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.631455 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/453a1a57-5017-420d-b2e5-2fef1a7721f5-image-import-ca\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.631475 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d8de494-9c7a-47e6-afa1-47007836acd8-client-ca\") pod \"route-controller-manager-6576b87f9c-6tp55\" (UID: \"0d8de494-9c7a-47e6-afa1-47007836acd8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.631492 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5991c579-d1dc-44d7-b62e-2465d9c2aa4b-serving-cert\") pod \"openshift-config-operator-7777fb866f-k6xrl\" (UID: \"5991c579-d1dc-44d7-b62e-2465d9c2aa4b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k6xrl" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.631513 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7b22dd74-4a14-454e-8b0d-9fdb57ce6653-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-l7rnd\" (UID: \"7b22dd74-4a14-454e-8b0d-9fdb57ce6653\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.631546 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.631586 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49vk7\" (UniqueName: \"kubernetes.io/projected/93145a04-d9cc-419c-aac9-a236aa357d00-kube-api-access-49vk7\") pod \"console-operator-58897d9998-w9vpf\" (UID: \"93145a04-d9cc-419c-aac9-a236aa357d00\") " pod="openshift-console-operator/console-operator-58897d9998-w9vpf" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.631606 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/67c0424c-b0ff-417d-bf4c-1cdcadd1ebac-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lvdx5\" (UID: \"67c0424c-b0ff-417d-bf4c-1cdcadd1ebac\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lvdx5" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 
14:26:52.631624 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7b22dd74-4a14-454e-8b0d-9fdb57ce6653-etcd-client\") pod \"apiserver-7bbb656c7d-l7rnd\" (UID: \"7b22dd74-4a14-454e-8b0d-9fdb57ce6653\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.631640 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbnwr\" (UniqueName: \"kubernetes.io/projected/7b22dd74-4a14-454e-8b0d-9fdb57ce6653-kube-api-access-xbnwr\") pod \"apiserver-7bbb656c7d-l7rnd\" (UID: \"7b22dd74-4a14-454e-8b0d-9fdb57ce6653\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.631661 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e561356f-4d50-4b6a-86f5-d7796e069802-config\") pod \"etcd-operator-b45778765-qmn8p\" (UID: \"e561356f-4d50-4b6a-86f5-d7796e069802\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qmn8p" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.631681 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d591da-4385-4890-ab0d-1a1ee8d934ea-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lnvrv\" (UID: \"e4d591da-4385-4890-ab0d-1a1ee8d934ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnvrv" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.631788 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-client-ca\") pod \"controller-manager-879f6c89f-vbgn5\" (UID: \"8b75fc2c-7703-4bee-9e6b-6ea32511fc42\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5" Nov 25 14:26:52 crc 
kubenswrapper[4796]: I1125 14:26:52.627210 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/453a1a57-5017-420d-b2e5-2fef1a7721f5-audit\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.632181 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d591da-4385-4890-ab0d-1a1ee8d934ea-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lnvrv\" (UID: \"e4d591da-4385-4890-ab0d-1a1ee8d934ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnvrv" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.630966 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-config\") pod \"controller-manager-879f6c89f-vbgn5\" (UID: \"8b75fc2c-7703-4bee-9e6b-6ea32511fc42\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.632240 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/76da93ba-dcf4-4f52-982f-ce98a9718cc8-audit-dir\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.632245 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7b22dd74-4a14-454e-8b0d-9fdb57ce6653-audit-policies\") pod \"apiserver-7bbb656c7d-l7rnd\" (UID: \"7b22dd74-4a14-454e-8b0d-9fdb57ce6653\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 
14:26:52.629476 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/524a60f2-4fff-4571-9f11-99d5178fd2a3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bxtz9\" (UID: \"524a60f2-4fff-4571-9f11-99d5178fd2a3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bxtz9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.627660 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba-config\") pod \"authentication-operator-69f744f599-fhbvb\" (UID: \"94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fhbvb" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.631354 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5991c579-d1dc-44d7-b62e-2465d9c2aa4b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-k6xrl\" (UID: \"5991c579-d1dc-44d7-b62e-2465d9c2aa4b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k6xrl" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.633543 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/453a1a57-5017-420d-b2e5-2fef1a7721f5-image-import-ca\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.633976 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: 
\"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.634533 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d8de494-9c7a-47e6-afa1-47007836acd8-client-ca\") pod \"route-controller-manager-6576b87f9c-6tp55\" (UID: \"0d8de494-9c7a-47e6-afa1-47007836acd8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.635020 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d926d83-e3cc-4bf1-ba33-629f2c058590-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-jjf82\" (UID: \"3d926d83-e3cc-4bf1-ba33-629f2c058590\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jjf82" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.635492 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.635503 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fa025925-c61e-49ae-ba50-79f4a401a20f-service-ca\") pod \"console-f9d7485db-x57qm\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " pod="openshift-console/console-f9d7485db-x57qm" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.635962 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/7b22dd74-4a14-454e-8b0d-9fdb57ce6653-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-l7rnd\" (UID: \"7b22dd74-4a14-454e-8b0d-9fdb57ce6653\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.638194 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.639542 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93145a04-d9cc-419c-aac9-a236aa357d00-config\") pod \"console-operator-58897d9998-w9vpf\" (UID: \"93145a04-d9cc-419c-aac9-a236aa357d00\") " pod="openshift-console-operator/console-operator-58897d9998-w9vpf" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.639991 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4g8pn"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.645074 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/453a1a57-5017-420d-b2e5-2fef1a7721f5-node-pullsecrets\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.645791 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/76da93ba-dcf4-4f52-982f-ce98a9718cc8-audit-policies\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.646081 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/93145a04-d9cc-419c-aac9-a236aa357d00-trusted-ca\") pod \"console-operator-58897d9998-w9vpf\" (UID: \"93145a04-d9cc-419c-aac9-a236aa357d00\") " pod="openshift-console-operator/console-operator-58897d9998-w9vpf" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.646748 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba-serving-cert\") pod \"authentication-operator-69f744f599-fhbvb\" (UID: \"94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fhbvb" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.624383 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/453a1a57-5017-420d-b2e5-2fef1a7721f5-audit-dir\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.648367 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b22dd74-4a14-454e-8b0d-9fdb57ce6653-serving-cert\") pod \"apiserver-7bbb656c7d-l7rnd\" (UID: \"7b22dd74-4a14-454e-8b0d-9fdb57ce6653\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.649468 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3d926d83-e3cc-4bf1-ba33-629f2c058590-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-jjf82\" (UID: 
\"3d926d83-e3cc-4bf1-ba33-629f2c058590\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jjf82" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.649822 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7b22dd74-4a14-454e-8b0d-9fdb57ce6653-etcd-client\") pod \"apiserver-7bbb656c7d-l7rnd\" (UID: \"7b22dd74-4a14-454e-8b0d-9fdb57ce6653\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.649830 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.649835 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5991c579-d1dc-44d7-b62e-2465d9c2aa4b-serving-cert\") pod \"openshift-config-operator-7777fb866f-k6xrl\" (UID: \"5991c579-d1dc-44d7-b62e-2465d9c2aa4b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k6xrl" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.650311 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93145a04-d9cc-419c-aac9-a236aa357d00-serving-cert\") pod \"console-operator-58897d9998-w9vpf\" (UID: \"93145a04-d9cc-419c-aac9-a236aa357d00\") " pod="openshift-console-operator/console-operator-58897d9998-w9vpf" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.649942 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.650597 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/453a1a57-5017-420d-b2e5-2fef1a7721f5-serving-cert\") pod \"apiserver-76f77b778f-vzn94\" (UID: 
\"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.650948 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7b22dd74-4a14-454e-8b0d-9fdb57ce6653-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-l7rnd\" (UID: \"7b22dd74-4a14-454e-8b0d-9fdb57ce6653\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.651005 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.649979 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.652633 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.652800 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wdh6v"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.654140 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/67c0424c-b0ff-417d-bf4c-1cdcadd1ebac-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lvdx5\" (UID: \"67c0424c-b0ff-417d-bf4c-1cdcadd1ebac\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lvdx5" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.654437 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.658533 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.660551 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e561356f-4d50-4b6a-86f5-d7796e069802-config\") pod \"etcd-operator-b45778765-qmn8p\" (UID: \"e561356f-4d50-4b6a-86f5-d7796e069802\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qmn8p" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.660908 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/453a1a57-5017-420d-b2e5-2fef1a7721f5-encryption-config\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.661078 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e561356f-4d50-4b6a-86f5-d7796e069802-etcd-ca\") pod \"etcd-operator-b45778765-qmn8p\" (UID: \"e561356f-4d50-4b6a-86f5-d7796e069802\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qmn8p" Nov 25 14:26:52 crc 
kubenswrapper[4796]: I1125 14:26:52.664155 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fa025925-c61e-49ae-ba50-79f4a401a20f-console-oauth-config\") pod \"console-f9d7485db-x57qm\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " pod="openshift-console/console-f9d7485db-x57qm" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.664739 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.666095 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2-machine-approver-tls\") pod \"machine-approver-56656f9798-qj88x\" (UID: \"0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qj88x" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.666939 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e561356f-4d50-4b6a-86f5-d7796e069802-etcd-client\") pod \"etcd-operator-b45778765-qmn8p\" (UID: \"e561356f-4d50-4b6a-86f5-d7796e069802\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qmn8p" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.667935 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.671116 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/345ca21d-184d-4326-b97e-976d4190ae2f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-njqdf\" (UID: \"345ca21d-184d-4326-b97e-976d4190ae2f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-njqdf" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.671505 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e561356f-4d50-4b6a-86f5-d7796e069802-serving-cert\") pod \"etcd-operator-b45778765-qmn8p\" (UID: \"e561356f-4d50-4b6a-86f5-d7796e069802\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qmn8p" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.671898 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa025925-c61e-49ae-ba50-79f4a401a20f-trusted-ca-bundle\") pod \"console-f9d7485db-x57qm\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " pod="openshift-console/console-f9d7485db-x57qm" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.675103 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e561356f-4d50-4b6a-86f5-d7796e069802-etcd-service-ca\") pod \"etcd-operator-b45778765-qmn8p\" (UID: \"e561356f-4d50-4b6a-86f5-d7796e069802\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qmn8p" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.680380 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-c8b7v"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.681633 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dr5s9"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.682973 4796 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-config-operator/machine-config-server-mtw8r"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.683607 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-mtw8r" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.696239 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.697230 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jz265"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.699393 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j22d5"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.700157 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-spphm"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.701102 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-vkwpv"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.702071 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6sp9g"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.703816 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jvr77"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.705488 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tsl5t"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.707330 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication-operator/authentication-operator-69f744f599-fhbvb"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.707558 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.708354 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-95xvf"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.709461 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401335-kz9sv"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.711171 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b72mt"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.713045 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9lndl"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.713067 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4dt7c"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.713926 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7rkrg"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.715607 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-w7tqs"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.716638 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-w7tqs" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.717090 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-w7tqs"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.718090 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-gbdnn"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.718682 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-gbdnn" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.719184 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-gbdnn"] Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.727070 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.766876 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.787254 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.806972 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.827409 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.859707 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.867914 4796 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.887619 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.907612 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.927699 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.948045 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.967602 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 25 14:26:52 crc kubenswrapper[4796]: I1125 14:26:52.988128 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.007961 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.028333 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.054357 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.067961 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.108385 4796 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.129061 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.148408 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.168897 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.188013 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.209341 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.228916 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.247977 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.267311 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.287629 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.307709 4796 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.327425 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.347835 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.369135 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.388272 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.409416 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.429285 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.449634 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.468721 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.489422 4796 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.508818 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.528566 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.548068 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.566171 4796 request.go:700] Waited for 1.012976083s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.567931 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.588060 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.608522 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.628797 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.647795 4796 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.668467 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.688074 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.708353 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.728067 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.748420 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.767632 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.789325 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.808116 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.827500 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.848822 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.868070 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.888675 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.908034 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.928521 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.948347 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.968897 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 25 14:26:53 crc kubenswrapper[4796]: I1125 14:26:53.988953 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.007869 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.028706 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.048529 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.068406 4796 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.089228 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.108422 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.127422 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.147820 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.169253 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.187670 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.208330 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.237424 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.279386 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbpmv\" (UniqueName: \"kubernetes.io/projected/453a1a57-5017-420d-b2e5-2fef1a7721f5-kube-api-access-hbpmv\") pod \"apiserver-76f77b778f-vzn94\" (UID: \"453a1a57-5017-420d-b2e5-2fef1a7721f5\") " pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.294927 
4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3d926d83-e3cc-4bf1-ba33-629f2c058590-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-jjf82\" (UID: \"3d926d83-e3cc-4bf1-ba33-629f2c058590\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jjf82" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.308126 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pklj8\" (UniqueName: \"kubernetes.io/projected/e4d591da-4385-4890-ab0d-1a1ee8d934ea-kube-api-access-pklj8\") pod \"openshift-apiserver-operator-796bbdcf4f-lnvrv\" (UID: \"e4d591da-4385-4890-ab0d-1a1ee8d934ea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnvrv" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.316505 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnvrv" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.328772 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtww4\" (UniqueName: \"kubernetes.io/projected/345ca21d-184d-4326-b97e-976d4190ae2f-kube-api-access-jtww4\") pod \"openshift-controller-manager-operator-756b6f6bc6-njqdf\" (UID: \"345ca21d-184d-4326-b97e-976d4190ae2f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-njqdf" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.339927 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcjx6\" (UniqueName: \"kubernetes.io/projected/94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba-kube-api-access-pcjx6\") pod \"authentication-operator-69f744f599-fhbvb\" (UID: \"94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fhbvb" Nov 25 14:26:54 crc kubenswrapper[4796]: 
I1125 14:26:54.364159 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lmvt\" (UniqueName: \"kubernetes.io/projected/c239761f-ade6-47eb-8fa5-f5178577ccb1-kube-api-access-5lmvt\") pod \"downloads-7954f5f757-tsl5t\" (UID: \"c239761f-ade6-47eb-8fa5-f5178577ccb1\") " pod="openshift-console/downloads-7954f5f757-tsl5t" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.383212 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzhhb\" (UniqueName: \"kubernetes.io/projected/0d8de494-9c7a-47e6-afa1-47007836acd8-kube-api-access-qzhhb\") pod \"route-controller-manager-6576b87f9c-6tp55\" (UID: \"0d8de494-9c7a-47e6-afa1-47007836acd8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.405026 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g86ll\" (UniqueName: \"kubernetes.io/projected/0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2-kube-api-access-g86ll\") pod \"machine-approver-56656f9798-qj88x\" (UID: \"0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qj88x" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.423353 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxlpf\" (UniqueName: \"kubernetes.io/projected/e561356f-4d50-4b6a-86f5-d7796e069802-kube-api-access-hxlpf\") pod \"etcd-operator-b45778765-qmn8p\" (UID: \"e561356f-4d50-4b6a-86f5-d7796e069802\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qmn8p" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.442985 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d95t\" (UniqueName: \"kubernetes.io/projected/524a60f2-4fff-4571-9f11-99d5178fd2a3-kube-api-access-9d95t\") pod \"cluster-samples-operator-665b6dd947-bxtz9\" (UID: 
\"524a60f2-4fff-4571-9f11-99d5178fd2a3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bxtz9" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.461552 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgk4s\" (UniqueName: \"kubernetes.io/projected/76da93ba-dcf4-4f52-982f-ce98a9718cc8-kube-api-access-bgk4s\") pod \"oauth-openshift-558db77b4-dr5s9\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.469060 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bxtz9" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.482932 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-njqdf" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.495021 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wz82\" (UniqueName: \"kubernetes.io/projected/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-kube-api-access-4wz82\") pod \"controller-manager-879f6c89f-vbgn5\" (UID: \"8b75fc2c-7703-4bee-9e6b-6ea32511fc42\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.500820 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.518093 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-fhbvb" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.519607 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knsbb\" (UniqueName: \"kubernetes.io/projected/fa025925-c61e-49ae-ba50-79f4a401a20f-kube-api-access-knsbb\") pod \"console-f9d7485db-x57qm\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " pod="openshift-console/console-f9d7485db-x57qm" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.520660 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clfcw\" (UniqueName: \"kubernetes.io/projected/5991c579-d1dc-44d7-b62e-2465d9c2aa4b-kube-api-access-clfcw\") pod \"openshift-config-operator-7777fb866f-k6xrl\" (UID: \"5991c579-d1dc-44d7-b62e-2465d9c2aa4b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k6xrl" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.530329 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.536831 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-tsl5t" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.542874 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-qmn8p" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.543467 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kst7\" (UniqueName: \"kubernetes.io/projected/67c0424c-b0ff-417d-bf4c-1cdcadd1ebac-kube-api-access-4kst7\") pod \"machine-api-operator-5694c8668f-lvdx5\" (UID: \"67c0424c-b0ff-417d-bf4c-1cdcadd1ebac\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lvdx5" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.563702 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49vk7\" (UniqueName: \"kubernetes.io/projected/93145a04-d9cc-419c-aac9-a236aa357d00-kube-api-access-49vk7\") pod \"console-operator-58897d9998-w9vpf\" (UID: \"93145a04-d9cc-419c-aac9-a236aa357d00\") " pod="openshift-console-operator/console-operator-58897d9998-w9vpf" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.570727 4796 request.go:700] Waited for 1.932966625s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.570861 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.597986 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlfgb\" (UniqueName: \"kubernetes.io/projected/3d926d83-e3cc-4bf1-ba33-629f2c058590-kube-api-access-vlfgb\") pod \"cluster-image-registry-operator-dc59b4c8b-jjf82\" (UID: \"3d926d83-e3cc-4bf1-ba33-629f2c058590\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jjf82" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.602598 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-lvdx5" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.607632 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbnwr\" (UniqueName: \"kubernetes.io/projected/7b22dd74-4a14-454e-8b0d-9fdb57ce6653-kube-api-access-xbnwr\") pod \"apiserver-7bbb656c7d-l7rnd\" (UID: \"7b22dd74-4a14-454e-8b0d-9fdb57ce6653\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.615550 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.628892 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.647524 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.650669 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qj88x" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.655321 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.667327 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k6xrl" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.668982 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.678503 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.689655 4796 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.703430 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bxtz9"] Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.708513 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.728154 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.748766 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.768700 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.788866 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.795751 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnvrv"] Nov 25 
14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.806954 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x57qm" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.824011 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-w9vpf" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.834818 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jjf82" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.880971 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.881056 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d07e3b8b-d9ae-40f6-901c-1be058824059-registry-tls\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.881122 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d07e3b8b-d9ae-40f6-901c-1be058824059-bound-sa-token\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.881152 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d07e3b8b-d9ae-40f6-901c-1be058824059-registry-certificates\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.881178 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d07e3b8b-d9ae-40f6-901c-1be058824059-trusted-ca\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.881197 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd5zp\" (UniqueName: \"kubernetes.io/projected/d07e3b8b-d9ae-40f6-901c-1be058824059-kube-api-access-kd5zp\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.881219 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d07e3b8b-d9ae-40f6-901c-1be058824059-installation-pull-secrets\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.881255 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d07e3b8b-d9ae-40f6-901c-1be058824059-ca-trust-extracted\") pod 
\"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:54 crc kubenswrapper[4796]: E1125 14:26:54.881430 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:55.381411192 +0000 UTC m=+143.724520616 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.984230 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:54 crc kubenswrapper[4796]: E1125 14:26:54.984515 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:55.484485465 +0000 UTC m=+143.827594939 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.984602 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/506a2195-43f9-4a3a-ad03-ad55166c7e03-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9lndl\" (UID: \"506a2195-43f9-4a3a-ad03-ad55166c7e03\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lndl" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.984667 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcpqs\" (UniqueName: \"kubernetes.io/projected/786d9482-4e90-4a71-abf3-40bf3101fc86-kube-api-access-vcpqs\") pod \"multus-admission-controller-857f4d67dd-c8b7v\" (UID: \"786d9482-4e90-4a71-abf3-40bf3101fc86\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c8b7v" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.984785 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d0d05a9-8584-4102-8f50-6c0e36923a3e-trusted-ca\") pod \"ingress-operator-5b745b69d9-qpbx9\" (UID: \"8d0d05a9-8584-4102-8f50-6c0e36923a3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qpbx9" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.984810 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bfa31788-625e-40c0-a671-90ff80e2f400-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zph9v\" (UID: \"bfa31788-625e-40c0-a671-90ff80e2f400\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zph9v" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.984834 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/371efde1-ba39-4eed-93e4-743cb2e7d996-certs\") pod \"machine-config-server-mtw8r\" (UID: \"371efde1-ba39-4eed-93e4-743cb2e7d996\") " pod="openshift-machine-config-operator/machine-config-server-mtw8r" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.984856 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/979c3909-38ab-4fa1-9374-29d4ce969c8e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4g8pn\" (UID: \"979c3909-38ab-4fa1-9374-29d4ce969c8e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4g8pn" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.984882 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f667ef84-04a1-4c76-95d7-75648124470f-srv-cert\") pod \"catalog-operator-68c6474976-wdh6v\" (UID: \"f667ef84-04a1-4c76-95d7-75648124470f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wdh6v" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.984908 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/63aeb87d-a8b1-40a5-95b9-e224d1bd968f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-64jzs\" (UID: \"63aeb87d-a8b1-40a5-95b9-e224d1bd968f\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64jzs" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.985004 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.985029 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/adc5632d-1700-4ff8-a1db-7e53ee263222-images\") pod \"machine-config-operator-74547568cd-vkwpv\" (UID: \"adc5632d-1700-4ff8-a1db-7e53ee263222\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vkwpv" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.985051 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfnq8\" (UniqueName: \"kubernetes.io/projected/9ec4132b-350d-4ae8-9f11-38218dd0b07e-kube-api-access-tfnq8\") pod \"packageserver-d55dfcdfc-b72mt\" (UID: \"9ec4132b-350d-4ae8-9f11-38218dd0b07e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b72mt" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.985141 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fab48abd-b847-4828-99f2-e9d7d3312e94-config-volume\") pod \"collect-profiles-29401335-kz9sv\" (UID: \"fab48abd-b847-4828-99f2-e9d7d3312e94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401335-kz9sv" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.985207 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a9d6e924-f8c3-4f0a-92f3-942e822e5fc5-mountpoint-dir\") pod \"csi-hostpathplugin-w7tqs\" (UID: \"a9d6e924-f8c3-4f0a-92f3-942e822e5fc5\") " pod="hostpath-provisioner/csi-hostpathplugin-w7tqs" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.985242 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdzh2\" (UniqueName: \"kubernetes.io/projected/4f72561b-ab6e-4eb5-b855-bbafd724ce5f-kube-api-access-wdzh2\") pod \"migrator-59844c95c7-rtsgw\" (UID: \"4f72561b-ab6e-4eb5-b855-bbafd724ce5f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rtsgw" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.985317 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7c7v\" (UniqueName: \"kubernetes.io/projected/adc5632d-1700-4ff8-a1db-7e53ee263222-kube-api-access-k7c7v\") pod \"machine-config-operator-74547568cd-vkwpv\" (UID: \"adc5632d-1700-4ff8-a1db-7e53ee263222\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vkwpv" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.985340 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4b66ff6-b653-4a4a-9d92-b16b94d4d4e9-config\") pod \"service-ca-operator-777779d784-4dt7c\" (UID: \"b4b66ff6-b653-4a4a-9d92-b16b94d4d4e9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dt7c" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.985367 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d07e3b8b-d9ae-40f6-901c-1be058824059-registry-tls\") pod \"image-registry-697d97f7c8-95xvf\" (UID: 
\"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.985418 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d07e3b8b-d9ae-40f6-901c-1be058824059-bound-sa-token\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.985441 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3c639e36-21d6-4cda-8fb4-08c52ea849c7-signing-cabundle\") pod \"service-ca-9c57cc56f-7rkrg\" (UID: \"3c639e36-21d6-4cda-8fb4-08c52ea849c7\") " pod="openshift-service-ca/service-ca-9c57cc56f-7rkrg" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.985512 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d07e3b8b-d9ae-40f6-901c-1be058824059-registry-certificates\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.985537 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcw5l\" (UniqueName: \"kubernetes.io/projected/fab48abd-b847-4828-99f2-e9d7d3312e94-kube-api-access-dcw5l\") pod \"collect-profiles-29401335-kz9sv\" (UID: \"fab48abd-b847-4828-99f2-e9d7d3312e94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401335-kz9sv" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.985593 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" 
(UniqueName: \"kubernetes.io/secret/a85003e7-763f-4480-af83-0a827574dc25-srv-cert\") pod \"olm-operator-6b444d44fb-jvr77\" (UID: \"a85003e7-763f-4480-af83-0a827574dc25\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jvr77" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.985629 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a85003e7-763f-4480-af83-0a827574dc25-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jvr77\" (UID: \"a85003e7-763f-4480-af83-0a827574dc25\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jvr77" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.985651 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0162f2df-c29a-4c00-b445-67a9bae4c5ad-stats-auth\") pod \"router-default-5444994796-c6rl5\" (UID: \"0162f2df-c29a-4c00-b445-67a9bae4c5ad\") " pod="openshift-ingress/router-default-5444994796-c6rl5" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.985711 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d07e3b8b-d9ae-40f6-901c-1be058824059-installation-pull-secrets\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.985737 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58s8t\" (UniqueName: \"kubernetes.io/projected/91c0402b-e438-4e77-8a6d-2765d09030e0-kube-api-access-58s8t\") pod \"dns-operator-744455d44c-bzlvn\" (UID: \"91c0402b-e438-4e77-8a6d-2765d09030e0\") " pod="openshift-dns-operator/dns-operator-744455d44c-bzlvn" Nov 25 14:26:54 crc 
kubenswrapper[4796]: I1125 14:26:54.985757 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdr65\" (UniqueName: \"kubernetes.io/projected/6b36d62b-3186-4e2b-961e-0e3553f75036-kube-api-access-gdr65\") pod \"dns-default-spphm\" (UID: \"6b36d62b-3186-4e2b-961e-0e3553f75036\") " pod="openshift-dns/dns-default-spphm" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.985780 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a628c8-c197-42af-a1e0-287e38308e8a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jz265\" (UID: \"c9a628c8-c197-42af-a1e0-287e38308e8a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jz265" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.986518 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9zrq\" (UniqueName: \"kubernetes.io/projected/a85003e7-763f-4480-af83-0a827574dc25-kube-api-access-f9zrq\") pod \"olm-operator-6b444d44fb-jvr77\" (UID: \"a85003e7-763f-4480-af83-0a827574dc25\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jvr77" Nov 25 14:26:54 crc kubenswrapper[4796]: E1125 14:26:54.986719 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:55.486707514 +0000 UTC m=+143.829817018 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.986910 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npdr5\" (UniqueName: \"kubernetes.io/projected/c9a628c8-c197-42af-a1e0-287e38308e8a-kube-api-access-npdr5\") pod \"kube-storage-version-migrator-operator-b67b599dd-jz265\" (UID: \"c9a628c8-c197-42af-a1e0-287e38308e8a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jz265" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.987379 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/adc5632d-1700-4ff8-a1db-7e53ee263222-auth-proxy-config\") pod \"machine-config-operator-74547568cd-vkwpv\" (UID: \"adc5632d-1700-4ff8-a1db-7e53ee263222\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vkwpv" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.988691 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d07e3b8b-d9ae-40f6-901c-1be058824059-registry-certificates\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.988938 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d07e3b8b-d9ae-40f6-901c-1be058824059-ca-trust-extracted\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.988994 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be17e585-a456-4306-9613-ac2498fc550c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-j22d5\" (UID: \"be17e585-a456-4306-9613-ac2498fc550c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j22d5" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.989032 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8d0d05a9-8584-4102-8f50-6c0e36923a3e-metrics-tls\") pod \"ingress-operator-5b745b69d9-qpbx9\" (UID: \"8d0d05a9-8584-4102-8f50-6c0e36923a3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qpbx9" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.989068 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bfa31788-625e-40c0-a671-90ff80e2f400-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zph9v\" (UID: \"bfa31788-625e-40c0-a671-90ff80e2f400\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zph9v" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.989093 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a628c8-c197-42af-a1e0-287e38308e8a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jz265\" (UID: 
\"c9a628c8-c197-42af-a1e0-287e38308e8a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jz265" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.989502 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d07e3b8b-d9ae-40f6-901c-1be058824059-ca-trust-extracted\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.990454 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0162f2df-c29a-4c00-b445-67a9bae4c5ad-service-ca-bundle\") pod \"router-default-5444994796-c6rl5\" (UID: \"0162f2df-c29a-4c00-b445-67a9bae4c5ad\") " pod="openshift-ingress/router-default-5444994796-c6rl5" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.992651 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a9d6e924-f8c3-4f0a-92f3-942e822e5fc5-csi-data-dir\") pod \"csi-hostpathplugin-w7tqs\" (UID: \"a9d6e924-f8c3-4f0a-92f3-942e822e5fc5\") " pod="hostpath-provisioner/csi-hostpathplugin-w7tqs" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.992684 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t9g7\" (UniqueName: \"kubernetes.io/projected/63aeb87d-a8b1-40a5-95b9-e224d1bd968f-kube-api-access-7t9g7\") pod \"control-plane-machine-set-operator-78cbb6b69f-64jzs\" (UID: \"63aeb87d-a8b1-40a5-95b9-e224d1bd968f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64jzs" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.993449 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p8gt\" (UniqueName: \"kubernetes.io/projected/979c3909-38ab-4fa1-9374-29d4ce969c8e-kube-api-access-2p8gt\") pod \"machine-config-controller-84d6567774-4g8pn\" (UID: \"979c3909-38ab-4fa1-9374-29d4ce969c8e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4g8pn" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.993652 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/045f9b17-672f-4c8f-b397-95edf297e34f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zgvlv\" (UID: \"045f9b17-672f-4c8f-b397-95edf297e34f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zgvlv" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.993694 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmgwr\" (UniqueName: \"kubernetes.io/projected/0162f2df-c29a-4c00-b445-67a9bae4c5ad-kube-api-access-lmgwr\") pod \"router-default-5444994796-c6rl5\" (UID: \"0162f2df-c29a-4c00-b445-67a9bae4c5ad\") " pod="openshift-ingress/router-default-5444994796-c6rl5" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.993722 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgjfv\" (UniqueName: \"kubernetes.io/projected/506a2195-43f9-4a3a-ad03-ad55166c7e03-kube-api-access-vgjfv\") pod \"marketplace-operator-79b997595-9lndl\" (UID: \"506a2195-43f9-4a3a-ad03-ad55166c7e03\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lndl" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.993924 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20ee15cc-da7a-4651-8ec6-a31683503069-cert\") pod 
\"ingress-canary-gbdnn\" (UID: \"20ee15cc-da7a-4651-8ec6-a31683503069\") " pod="openshift-ingress-canary/ingress-canary-gbdnn" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.993967 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a9d6e924-f8c3-4f0a-92f3-942e822e5fc5-plugins-dir\") pod \"csi-hostpathplugin-w7tqs\" (UID: \"a9d6e924-f8c3-4f0a-92f3-942e822e5fc5\") " pod="hostpath-provisioner/csi-hostpathplugin-w7tqs" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.993992 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/786d9482-4e90-4a71-abf3-40bf3101fc86-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c8b7v\" (UID: \"786d9482-4e90-4a71-abf3-40bf3101fc86\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c8b7v" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.994421 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f667ef84-04a1-4c76-95d7-75648124470f-profile-collector-cert\") pod \"catalog-operator-68c6474976-wdh6v\" (UID: \"f667ef84-04a1-4c76-95d7-75648124470f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wdh6v" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.994822 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3c639e36-21d6-4cda-8fb4-08c52ea849c7-signing-key\") pod \"service-ca-9c57cc56f-7rkrg\" (UID: \"3c639e36-21d6-4cda-8fb4-08c52ea849c7\") " pod="openshift-service-ca/service-ca-9c57cc56f-7rkrg" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.995474 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/045f9b17-672f-4c8f-b397-95edf297e34f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zgvlv\" (UID: \"045f9b17-672f-4c8f-b397-95edf297e34f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zgvlv" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.995513 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvhhl\" (UniqueName: \"kubernetes.io/projected/e91d3d88-7e80-4c0d-8c97-405ba9487fe7-kube-api-access-lvhhl\") pod \"package-server-manager-789f6589d5-6sp9g\" (UID: \"e91d3d88-7e80-4c0d-8c97-405ba9487fe7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6sp9g" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.995707 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zq9x\" (UniqueName: \"kubernetes.io/projected/8d0d05a9-8584-4102-8f50-6c0e36923a3e-kube-api-access-8zq9x\") pod \"ingress-operator-5b745b69d9-qpbx9\" (UID: \"8d0d05a9-8584-4102-8f50-6c0e36923a3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qpbx9" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.996852 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/adc5632d-1700-4ff8-a1db-7e53ee263222-proxy-tls\") pod \"machine-config-operator-74547568cd-vkwpv\" (UID: \"adc5632d-1700-4ff8-a1db-7e53ee263222\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vkwpv" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.997248 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zphxs\" (UniqueName: \"kubernetes.io/projected/a9d6e924-f8c3-4f0a-92f3-942e822e5fc5-kube-api-access-zphxs\") pod 
\"csi-hostpathplugin-w7tqs\" (UID: \"a9d6e924-f8c3-4f0a-92f3-942e822e5fc5\") " pod="hostpath-provisioner/csi-hostpathplugin-w7tqs" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.997621 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/371efde1-ba39-4eed-93e4-743cb2e7d996-node-bootstrap-token\") pod \"machine-config-server-mtw8r\" (UID: \"371efde1-ba39-4eed-93e4-743cb2e7d996\") " pod="openshift-machine-config-operator/machine-config-server-mtw8r" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.997963 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d0d05a9-8584-4102-8f50-6c0e36923a3e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qpbx9\" (UID: \"8d0d05a9-8584-4102-8f50-6c0e36923a3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qpbx9" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.998240 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9ec4132b-350d-4ae8-9f11-38218dd0b07e-webhook-cert\") pod \"packageserver-d55dfcdfc-b72mt\" (UID: \"9ec4132b-350d-4ae8-9f11-38218dd0b07e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b72mt" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.998373 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qkvk\" (UniqueName: \"kubernetes.io/projected/20ee15cc-da7a-4651-8ec6-a31683503069-kube-api-access-6qkvk\") pod \"ingress-canary-gbdnn\" (UID: \"20ee15cc-da7a-4651-8ec6-a31683503069\") " pod="openshift-ingress-canary/ingress-canary-gbdnn" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.998545 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a9d6e924-f8c3-4f0a-92f3-942e822e5fc5-socket-dir\") pod \"csi-hostpathplugin-w7tqs\" (UID: \"a9d6e924-f8c3-4f0a-92f3-942e822e5fc5\") " pod="hostpath-provisioner/csi-hostpathplugin-w7tqs" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.998847 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a9d6e924-f8c3-4f0a-92f3-942e822e5fc5-registration-dir\") pod \"csi-hostpathplugin-w7tqs\" (UID: \"a9d6e924-f8c3-4f0a-92f3-942e822e5fc5\") " pod="hostpath-provisioner/csi-hostpathplugin-w7tqs" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.998878 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/045f9b17-672f-4c8f-b397-95edf297e34f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zgvlv\" (UID: \"045f9b17-672f-4c8f-b397-95edf297e34f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zgvlv" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.999134 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be17e585-a456-4306-9613-ac2498fc550c-config\") pod \"kube-controller-manager-operator-78b949d7b-j22d5\" (UID: \"be17e585-a456-4306-9613-ac2498fc550c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j22d5" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.999179 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e91d3d88-7e80-4c0d-8c97-405ba9487fe7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6sp9g\" (UID: 
\"e91d3d88-7e80-4c0d-8c97-405ba9487fe7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6sp9g" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.999230 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fab48abd-b847-4828-99f2-e9d7d3312e94-secret-volume\") pod \"collect-profiles-29401335-kz9sv\" (UID: \"fab48abd-b847-4828-99f2-e9d7d3312e94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401335-kz9sv" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.999356 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be17e585-a456-4306-9613-ac2498fc550c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-j22d5\" (UID: \"be17e585-a456-4306-9613-ac2498fc550c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j22d5" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.999501 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhkrs\" (UniqueName: \"kubernetes.io/projected/3c639e36-21d6-4cda-8fb4-08c52ea849c7-kube-api-access-bhkrs\") pod \"service-ca-9c57cc56f-7rkrg\" (UID: \"3c639e36-21d6-4cda-8fb4-08c52ea849c7\") " pod="openshift-service-ca/service-ca-9c57cc56f-7rkrg" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.999775 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d07e3b8b-d9ae-40f6-901c-1be058824059-trusted-ca\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:54 crc kubenswrapper[4796]: I1125 14:26:54.999825 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfa31788-625e-40c0-a671-90ff80e2f400-config\") pod \"kube-apiserver-operator-766d6c64bb-zph9v\" (UID: \"bfa31788-625e-40c0-a671-90ff80e2f400\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zph9v" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.000013 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0162f2df-c29a-4c00-b445-67a9bae4c5ad-default-certificate\") pod \"router-default-5444994796-c6rl5\" (UID: \"0162f2df-c29a-4c00-b445-67a9bae4c5ad\") " pod="openshift-ingress/router-default-5444994796-c6rl5" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.000113 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd5zp\" (UniqueName: \"kubernetes.io/projected/d07e3b8b-d9ae-40f6-901c-1be058824059-kube-api-access-kd5zp\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.000464 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/91c0402b-e438-4e77-8a6d-2765d09030e0-metrics-tls\") pod \"dns-operator-744455d44c-bzlvn\" (UID: \"91c0402b-e438-4e77-8a6d-2765d09030e0\") " pod="openshift-dns-operator/dns-operator-744455d44c-bzlvn" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.000548 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7b9d\" (UniqueName: \"kubernetes.io/projected/f667ef84-04a1-4c76-95d7-75648124470f-kube-api-access-l7b9d\") pod \"catalog-operator-68c6474976-wdh6v\" (UID: \"f667ef84-04a1-4c76-95d7-75648124470f\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wdh6v" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.001014 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9ec4132b-350d-4ae8-9f11-38218dd0b07e-apiservice-cert\") pod \"packageserver-d55dfcdfc-b72mt\" (UID: \"9ec4132b-350d-4ae8-9f11-38218dd0b07e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b72mt" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.001125 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prdd4\" (UniqueName: \"kubernetes.io/projected/371efde1-ba39-4eed-93e4-743cb2e7d996-kube-api-access-prdd4\") pod \"machine-config-server-mtw8r\" (UID: \"371efde1-ba39-4eed-93e4-743cb2e7d996\") " pod="openshift-machine-config-operator/machine-config-server-mtw8r" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.001160 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b36d62b-3186-4e2b-961e-0e3553f75036-config-volume\") pod \"dns-default-spphm\" (UID: \"6b36d62b-3186-4e2b-961e-0e3553f75036\") " pod="openshift-dns/dns-default-spphm" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.001196 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6b36d62b-3186-4e2b-961e-0e3553f75036-metrics-tls\") pod \"dns-default-spphm\" (UID: \"6b36d62b-3186-4e2b-961e-0e3553f75036\") " pod="openshift-dns/dns-default-spphm" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.001502 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0162f2df-c29a-4c00-b445-67a9bae4c5ad-metrics-certs\") 
pod \"router-default-5444994796-c6rl5\" (UID: \"0162f2df-c29a-4c00-b445-67a9bae4c5ad\") " pod="openshift-ingress/router-default-5444994796-c6rl5" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.001542 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4b66ff6-b653-4a4a-9d92-b16b94d4d4e9-serving-cert\") pod \"service-ca-operator-777779d784-4dt7c\" (UID: \"b4b66ff6-b653-4a4a-9d92-b16b94d4d4e9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dt7c" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.001647 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9ec4132b-350d-4ae8-9f11-38218dd0b07e-tmpfs\") pod \"packageserver-d55dfcdfc-b72mt\" (UID: \"9ec4132b-350d-4ae8-9f11-38218dd0b07e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b72mt" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.001678 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms2pb\" (UniqueName: \"kubernetes.io/projected/b4b66ff6-b653-4a4a-9d92-b16b94d4d4e9-kube-api-access-ms2pb\") pod \"service-ca-operator-777779d784-4dt7c\" (UID: \"b4b66ff6-b653-4a4a-9d92-b16b94d4d4e9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dt7c" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.001718 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/979c3909-38ab-4fa1-9374-29d4ce969c8e-proxy-tls\") pod \"machine-config-controller-84d6567774-4g8pn\" (UID: \"979c3909-38ab-4fa1-9374-29d4ce969c8e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4g8pn" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.001752 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/506a2195-43f9-4a3a-ad03-ad55166c7e03-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9lndl\" (UID: \"506a2195-43f9-4a3a-ad03-ad55166c7e03\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lndl" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.002747 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d07e3b8b-d9ae-40f6-901c-1be058824059-trusted-ca\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.008336 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d07e3b8b-d9ae-40f6-901c-1be058824059-installation-pull-secrets\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.008541 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d07e3b8b-d9ae-40f6-901c-1be058824059-registry-tls\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.033942 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d07e3b8b-d9ae-40f6-901c-1be058824059-bound-sa-token\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.036697 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55"] Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.040093 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-njqdf"] Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.043709 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dr5s9"] Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.051410 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fhbvb"] Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.063999 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd5zp\" (UniqueName: \"kubernetes.io/projected/d07e3b8b-d9ae-40f6-901c-1be058824059-kube-api-access-kd5zp\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102296 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102495 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgjfv\" (UniqueName: \"kubernetes.io/projected/506a2195-43f9-4a3a-ad03-ad55166c7e03-kube-api-access-vgjfv\") pod \"marketplace-operator-79b997595-9lndl\" (UID: 
\"506a2195-43f9-4a3a-ad03-ad55166c7e03\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lndl" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102518 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/786d9482-4e90-4a71-abf3-40bf3101fc86-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c8b7v\" (UID: \"786d9482-4e90-4a71-abf3-40bf3101fc86\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c8b7v" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102537 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20ee15cc-da7a-4651-8ec6-a31683503069-cert\") pod \"ingress-canary-gbdnn\" (UID: \"20ee15cc-da7a-4651-8ec6-a31683503069\") " pod="openshift-ingress-canary/ingress-canary-gbdnn" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102554 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a9d6e924-f8c3-4f0a-92f3-942e822e5fc5-plugins-dir\") pod \"csi-hostpathplugin-w7tqs\" (UID: \"a9d6e924-f8c3-4f0a-92f3-942e822e5fc5\") " pod="hostpath-provisioner/csi-hostpathplugin-w7tqs" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102585 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f667ef84-04a1-4c76-95d7-75648124470f-profile-collector-cert\") pod \"catalog-operator-68c6474976-wdh6v\" (UID: \"f667ef84-04a1-4c76-95d7-75648124470f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wdh6v" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102609 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3c639e36-21d6-4cda-8fb4-08c52ea849c7-signing-key\") pod 
\"service-ca-9c57cc56f-7rkrg\" (UID: \"3c639e36-21d6-4cda-8fb4-08c52ea849c7\") " pod="openshift-service-ca/service-ca-9c57cc56f-7rkrg" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102627 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/045f9b17-672f-4c8f-b397-95edf297e34f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zgvlv\" (UID: \"045f9b17-672f-4c8f-b397-95edf297e34f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zgvlv" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102646 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvhhl\" (UniqueName: \"kubernetes.io/projected/e91d3d88-7e80-4c0d-8c97-405ba9487fe7-kube-api-access-lvhhl\") pod \"package-server-manager-789f6589d5-6sp9g\" (UID: \"e91d3d88-7e80-4c0d-8c97-405ba9487fe7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6sp9g" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102660 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zq9x\" (UniqueName: \"kubernetes.io/projected/8d0d05a9-8584-4102-8f50-6c0e36923a3e-kube-api-access-8zq9x\") pod \"ingress-operator-5b745b69d9-qpbx9\" (UID: \"8d0d05a9-8584-4102-8f50-6c0e36923a3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qpbx9" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102675 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/adc5632d-1700-4ff8-a1db-7e53ee263222-proxy-tls\") pod \"machine-config-operator-74547568cd-vkwpv\" (UID: \"adc5632d-1700-4ff8-a1db-7e53ee263222\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vkwpv" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102691 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zphxs\" (UniqueName: \"kubernetes.io/projected/a9d6e924-f8c3-4f0a-92f3-942e822e5fc5-kube-api-access-zphxs\") pod \"csi-hostpathplugin-w7tqs\" (UID: \"a9d6e924-f8c3-4f0a-92f3-942e822e5fc5\") " pod="hostpath-provisioner/csi-hostpathplugin-w7tqs" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102705 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/371efde1-ba39-4eed-93e4-743cb2e7d996-node-bootstrap-token\") pod \"machine-config-server-mtw8r\" (UID: \"371efde1-ba39-4eed-93e4-743cb2e7d996\") " pod="openshift-machine-config-operator/machine-config-server-mtw8r" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102718 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d0d05a9-8584-4102-8f50-6c0e36923a3e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qpbx9\" (UID: \"8d0d05a9-8584-4102-8f50-6c0e36923a3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qpbx9" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102733 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9ec4132b-350d-4ae8-9f11-38218dd0b07e-webhook-cert\") pod \"packageserver-d55dfcdfc-b72mt\" (UID: \"9ec4132b-350d-4ae8-9f11-38218dd0b07e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b72mt" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102751 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qkvk\" (UniqueName: \"kubernetes.io/projected/20ee15cc-da7a-4651-8ec6-a31683503069-kube-api-access-6qkvk\") pod \"ingress-canary-gbdnn\" (UID: \"20ee15cc-da7a-4651-8ec6-a31683503069\") " pod="openshift-ingress-canary/ingress-canary-gbdnn" Nov 25 14:26:55 crc 
kubenswrapper[4796]: I1125 14:26:55.102772 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a9d6e924-f8c3-4f0a-92f3-942e822e5fc5-socket-dir\") pod \"csi-hostpathplugin-w7tqs\" (UID: \"a9d6e924-f8c3-4f0a-92f3-942e822e5fc5\") " pod="hostpath-provisioner/csi-hostpathplugin-w7tqs" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102788 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/045f9b17-672f-4c8f-b397-95edf297e34f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zgvlv\" (UID: \"045f9b17-672f-4c8f-b397-95edf297e34f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zgvlv" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102815 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a9d6e924-f8c3-4f0a-92f3-942e822e5fc5-registration-dir\") pod \"csi-hostpathplugin-w7tqs\" (UID: \"a9d6e924-f8c3-4f0a-92f3-942e822e5fc5\") " pod="hostpath-provisioner/csi-hostpathplugin-w7tqs" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102832 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be17e585-a456-4306-9613-ac2498fc550c-config\") pod \"kube-controller-manager-operator-78b949d7b-j22d5\" (UID: \"be17e585-a456-4306-9613-ac2498fc550c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j22d5" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102847 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e91d3d88-7e80-4c0d-8c97-405ba9487fe7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6sp9g\" (UID: 
\"e91d3d88-7e80-4c0d-8c97-405ba9487fe7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6sp9g" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102864 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fab48abd-b847-4828-99f2-e9d7d3312e94-secret-volume\") pod \"collect-profiles-29401335-kz9sv\" (UID: \"fab48abd-b847-4828-99f2-e9d7d3312e94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401335-kz9sv" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102882 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be17e585-a456-4306-9613-ac2498fc550c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-j22d5\" (UID: \"be17e585-a456-4306-9613-ac2498fc550c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j22d5" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102905 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhkrs\" (UniqueName: \"kubernetes.io/projected/3c639e36-21d6-4cda-8fb4-08c52ea849c7-kube-api-access-bhkrs\") pod \"service-ca-9c57cc56f-7rkrg\" (UID: \"3c639e36-21d6-4cda-8fb4-08c52ea849c7\") " pod="openshift-service-ca/service-ca-9c57cc56f-7rkrg" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102920 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfa31788-625e-40c0-a671-90ff80e2f400-config\") pod \"kube-apiserver-operator-766d6c64bb-zph9v\" (UID: \"bfa31788-625e-40c0-a671-90ff80e2f400\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zph9v" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102937 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/0162f2df-c29a-4c00-b445-67a9bae4c5ad-default-certificate\") pod \"router-default-5444994796-c6rl5\" (UID: \"0162f2df-c29a-4c00-b445-67a9bae4c5ad\") " pod="openshift-ingress/router-default-5444994796-c6rl5" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102955 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/91c0402b-e438-4e77-8a6d-2765d09030e0-metrics-tls\") pod \"dns-operator-744455d44c-bzlvn\" (UID: \"91c0402b-e438-4e77-8a6d-2765d09030e0\") " pod="openshift-dns-operator/dns-operator-744455d44c-bzlvn" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102971 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7b9d\" (UniqueName: \"kubernetes.io/projected/f667ef84-04a1-4c76-95d7-75648124470f-kube-api-access-l7b9d\") pod \"catalog-operator-68c6474976-wdh6v\" (UID: \"f667ef84-04a1-4c76-95d7-75648124470f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wdh6v" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.102994 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9ec4132b-350d-4ae8-9f11-38218dd0b07e-apiservice-cert\") pod \"packageserver-d55dfcdfc-b72mt\" (UID: \"9ec4132b-350d-4ae8-9f11-38218dd0b07e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b72mt" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103009 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b36d62b-3186-4e2b-961e-0e3553f75036-config-volume\") pod \"dns-default-spphm\" (UID: \"6b36d62b-3186-4e2b-961e-0e3553f75036\") " pod="openshift-dns/dns-default-spphm" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103024 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6b36d62b-3186-4e2b-961e-0e3553f75036-metrics-tls\") pod \"dns-default-spphm\" (UID: \"6b36d62b-3186-4e2b-961e-0e3553f75036\") " pod="openshift-dns/dns-default-spphm" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103039 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0162f2df-c29a-4c00-b445-67a9bae4c5ad-metrics-certs\") pod \"router-default-5444994796-c6rl5\" (UID: \"0162f2df-c29a-4c00-b445-67a9bae4c5ad\") " pod="openshift-ingress/router-default-5444994796-c6rl5" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103056 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prdd4\" (UniqueName: \"kubernetes.io/projected/371efde1-ba39-4eed-93e4-743cb2e7d996-kube-api-access-prdd4\") pod \"machine-config-server-mtw8r\" (UID: \"371efde1-ba39-4eed-93e4-743cb2e7d996\") " pod="openshift-machine-config-operator/machine-config-server-mtw8r" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103071 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4b66ff6-b653-4a4a-9d92-b16b94d4d4e9-serving-cert\") pod \"service-ca-operator-777779d784-4dt7c\" (UID: \"b4b66ff6-b653-4a4a-9d92-b16b94d4d4e9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dt7c" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103085 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/979c3909-38ab-4fa1-9374-29d4ce969c8e-proxy-tls\") pod \"machine-config-controller-84d6567774-4g8pn\" (UID: \"979c3909-38ab-4fa1-9374-29d4ce969c8e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4g8pn" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103099 
4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/506a2195-43f9-4a3a-ad03-ad55166c7e03-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9lndl\" (UID: \"506a2195-43f9-4a3a-ad03-ad55166c7e03\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lndl" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103115 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9ec4132b-350d-4ae8-9f11-38218dd0b07e-tmpfs\") pod \"packageserver-d55dfcdfc-b72mt\" (UID: \"9ec4132b-350d-4ae8-9f11-38218dd0b07e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b72mt" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103130 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms2pb\" (UniqueName: \"kubernetes.io/projected/b4b66ff6-b653-4a4a-9d92-b16b94d4d4e9-kube-api-access-ms2pb\") pod \"service-ca-operator-777779d784-4dt7c\" (UID: \"b4b66ff6-b653-4a4a-9d92-b16b94d4d4e9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dt7c" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103147 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/506a2195-43f9-4a3a-ad03-ad55166c7e03-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9lndl\" (UID: \"506a2195-43f9-4a3a-ad03-ad55166c7e03\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lndl" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103163 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcpqs\" (UniqueName: \"kubernetes.io/projected/786d9482-4e90-4a71-abf3-40bf3101fc86-kube-api-access-vcpqs\") pod \"multus-admission-controller-857f4d67dd-c8b7v\" (UID: 
\"786d9482-4e90-4a71-abf3-40bf3101fc86\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c8b7v" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103181 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d0d05a9-8584-4102-8f50-6c0e36923a3e-trusted-ca\") pod \"ingress-operator-5b745b69d9-qpbx9\" (UID: \"8d0d05a9-8584-4102-8f50-6c0e36923a3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qpbx9" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103195 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f667ef84-04a1-4c76-95d7-75648124470f-srv-cert\") pod \"catalog-operator-68c6474976-wdh6v\" (UID: \"f667ef84-04a1-4c76-95d7-75648124470f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wdh6v" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103212 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/63aeb87d-a8b1-40a5-95b9-e224d1bd968f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-64jzs\" (UID: \"63aeb87d-a8b1-40a5-95b9-e224d1bd968f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64jzs" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103245 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfa31788-625e-40c0-a671-90ff80e2f400-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zph9v\" (UID: \"bfa31788-625e-40c0-a671-90ff80e2f400\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zph9v" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103259 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"certs\" (UniqueName: \"kubernetes.io/secret/371efde1-ba39-4eed-93e4-743cb2e7d996-certs\") pod \"machine-config-server-mtw8r\" (UID: \"371efde1-ba39-4eed-93e4-743cb2e7d996\") " pod="openshift-machine-config-operator/machine-config-server-mtw8r" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103273 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/979c3909-38ab-4fa1-9374-29d4ce969c8e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4g8pn\" (UID: \"979c3909-38ab-4fa1-9374-29d4ce969c8e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4g8pn" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103297 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/adc5632d-1700-4ff8-a1db-7e53ee263222-images\") pod \"machine-config-operator-74547568cd-vkwpv\" (UID: \"adc5632d-1700-4ff8-a1db-7e53ee263222\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vkwpv" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103311 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfnq8\" (UniqueName: \"kubernetes.io/projected/9ec4132b-350d-4ae8-9f11-38218dd0b07e-kube-api-access-tfnq8\") pod \"packageserver-d55dfcdfc-b72mt\" (UID: \"9ec4132b-350d-4ae8-9f11-38218dd0b07e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b72mt" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103328 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fab48abd-b847-4828-99f2-e9d7d3312e94-config-volume\") pod \"collect-profiles-29401335-kz9sv\" (UID: \"fab48abd-b847-4828-99f2-e9d7d3312e94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401335-kz9sv" Nov 25 14:26:55 crc 
kubenswrapper[4796]: I1125 14:26:55.103346 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a9d6e924-f8c3-4f0a-92f3-942e822e5fc5-mountpoint-dir\") pod \"csi-hostpathplugin-w7tqs\" (UID: \"a9d6e924-f8c3-4f0a-92f3-942e822e5fc5\") " pod="hostpath-provisioner/csi-hostpathplugin-w7tqs" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103361 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdzh2\" (UniqueName: \"kubernetes.io/projected/4f72561b-ab6e-4eb5-b855-bbafd724ce5f-kube-api-access-wdzh2\") pod \"migrator-59844c95c7-rtsgw\" (UID: \"4f72561b-ab6e-4eb5-b855-bbafd724ce5f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rtsgw" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103378 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7c7v\" (UniqueName: \"kubernetes.io/projected/adc5632d-1700-4ff8-a1db-7e53ee263222-kube-api-access-k7c7v\") pod \"machine-config-operator-74547568cd-vkwpv\" (UID: \"adc5632d-1700-4ff8-a1db-7e53ee263222\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vkwpv" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103392 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4b66ff6-b653-4a4a-9d92-b16b94d4d4e9-config\") pod \"service-ca-operator-777779d784-4dt7c\" (UID: \"b4b66ff6-b653-4a4a-9d92-b16b94d4d4e9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dt7c" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103408 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3c639e36-21d6-4cda-8fb4-08c52ea849c7-signing-cabundle\") pod \"service-ca-9c57cc56f-7rkrg\" (UID: 
\"3c639e36-21d6-4cda-8fb4-08c52ea849c7\") " pod="openshift-service-ca/service-ca-9c57cc56f-7rkrg" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103430 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcw5l\" (UniqueName: \"kubernetes.io/projected/fab48abd-b847-4828-99f2-e9d7d3312e94-kube-api-access-dcw5l\") pod \"collect-profiles-29401335-kz9sv\" (UID: \"fab48abd-b847-4828-99f2-e9d7d3312e94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401335-kz9sv" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103446 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a85003e7-763f-4480-af83-0a827574dc25-srv-cert\") pod \"olm-operator-6b444d44fb-jvr77\" (UID: \"a85003e7-763f-4480-af83-0a827574dc25\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jvr77" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103462 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a85003e7-763f-4480-af83-0a827574dc25-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jvr77\" (UID: \"a85003e7-763f-4480-af83-0a827574dc25\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jvr77" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103477 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0162f2df-c29a-4c00-b445-67a9bae4c5ad-stats-auth\") pod \"router-default-5444994796-c6rl5\" (UID: \"0162f2df-c29a-4c00-b445-67a9bae4c5ad\") " pod="openshift-ingress/router-default-5444994796-c6rl5" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103492 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58s8t\" (UniqueName: 
\"kubernetes.io/projected/91c0402b-e438-4e77-8a6d-2765d09030e0-kube-api-access-58s8t\") pod \"dns-operator-744455d44c-bzlvn\" (UID: \"91c0402b-e438-4e77-8a6d-2765d09030e0\") " pod="openshift-dns-operator/dns-operator-744455d44c-bzlvn" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103507 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdr65\" (UniqueName: \"kubernetes.io/projected/6b36d62b-3186-4e2b-961e-0e3553f75036-kube-api-access-gdr65\") pod \"dns-default-spphm\" (UID: \"6b36d62b-3186-4e2b-961e-0e3553f75036\") " pod="openshift-dns/dns-default-spphm" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103522 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a628c8-c197-42af-a1e0-287e38308e8a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jz265\" (UID: \"c9a628c8-c197-42af-a1e0-287e38308e8a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jz265" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103538 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9zrq\" (UniqueName: \"kubernetes.io/projected/a85003e7-763f-4480-af83-0a827574dc25-kube-api-access-f9zrq\") pod \"olm-operator-6b444d44fb-jvr77\" (UID: \"a85003e7-763f-4480-af83-0a827574dc25\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jvr77" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103553 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npdr5\" (UniqueName: \"kubernetes.io/projected/c9a628c8-c197-42af-a1e0-287e38308e8a-kube-api-access-npdr5\") pod \"kube-storage-version-migrator-operator-b67b599dd-jz265\" (UID: \"c9a628c8-c197-42af-a1e0-287e38308e8a\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jz265" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103589 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/adc5632d-1700-4ff8-a1db-7e53ee263222-auth-proxy-config\") pod \"machine-config-operator-74547568cd-vkwpv\" (UID: \"adc5632d-1700-4ff8-a1db-7e53ee263222\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vkwpv" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103604 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8d0d05a9-8584-4102-8f50-6c0e36923a3e-metrics-tls\") pod \"ingress-operator-5b745b69d9-qpbx9\" (UID: \"8d0d05a9-8584-4102-8f50-6c0e36923a3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qpbx9" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103620 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be17e585-a456-4306-9613-ac2498fc550c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-j22d5\" (UID: \"be17e585-a456-4306-9613-ac2498fc550c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j22d5" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103636 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bfa31788-625e-40c0-a671-90ff80e2f400-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zph9v\" (UID: \"bfa31788-625e-40c0-a671-90ff80e2f400\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zph9v" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103650 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/c9a628c8-c197-42af-a1e0-287e38308e8a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jz265\" (UID: \"c9a628c8-c197-42af-a1e0-287e38308e8a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jz265" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103664 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0162f2df-c29a-4c00-b445-67a9bae4c5ad-service-ca-bundle\") pod \"router-default-5444994796-c6rl5\" (UID: \"0162f2df-c29a-4c00-b445-67a9bae4c5ad\") " pod="openshift-ingress/router-default-5444994796-c6rl5" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103681 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a9d6e924-f8c3-4f0a-92f3-942e822e5fc5-csi-data-dir\") pod \"csi-hostpathplugin-w7tqs\" (UID: \"a9d6e924-f8c3-4f0a-92f3-942e822e5fc5\") " pod="hostpath-provisioner/csi-hostpathplugin-w7tqs" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103696 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t9g7\" (UniqueName: \"kubernetes.io/projected/63aeb87d-a8b1-40a5-95b9-e224d1bd968f-kube-api-access-7t9g7\") pod \"control-plane-machine-set-operator-78cbb6b69f-64jzs\" (UID: \"63aeb87d-a8b1-40a5-95b9-e224d1bd968f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64jzs" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103713 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p8gt\" (UniqueName: \"kubernetes.io/projected/979c3909-38ab-4fa1-9374-29d4ce969c8e-kube-api-access-2p8gt\") pod \"machine-config-controller-84d6567774-4g8pn\" (UID: \"979c3909-38ab-4fa1-9374-29d4ce969c8e\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4g8pn" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103729 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/045f9b17-672f-4c8f-b397-95edf297e34f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zgvlv\" (UID: \"045f9b17-672f-4c8f-b397-95edf297e34f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zgvlv" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.103746 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmgwr\" (UniqueName: \"kubernetes.io/projected/0162f2df-c29a-4c00-b445-67a9bae4c5ad-kube-api-access-lmgwr\") pod \"router-default-5444994796-c6rl5\" (UID: \"0162f2df-c29a-4c00-b445-67a9bae4c5ad\") " pod="openshift-ingress/router-default-5444994796-c6rl5" Nov 25 14:26:55 crc kubenswrapper[4796]: E1125 14:26:55.103942 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:55.603928246 +0000 UTC m=+143.947037660 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.106751 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/506a2195-43f9-4a3a-ad03-ad55166c7e03-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9lndl\" (UID: \"506a2195-43f9-4a3a-ad03-ad55166c7e03\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lndl" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.107516 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a9d6e924-f8c3-4f0a-92f3-942e822e5fc5-mountpoint-dir\") pod \"csi-hostpathplugin-w7tqs\" (UID: \"a9d6e924-f8c3-4f0a-92f3-942e822e5fc5\") " pod="hostpath-provisioner/csi-hostpathplugin-w7tqs" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.107808 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fab48abd-b847-4828-99f2-e9d7d3312e94-config-volume\") pod \"collect-profiles-29401335-kz9sv\" (UID: \"fab48abd-b847-4828-99f2-e9d7d3312e94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401335-kz9sv" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.108266 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4b66ff6-b653-4a4a-9d92-b16b94d4d4e9-config\") pod \"service-ca-operator-777779d784-4dt7c\" (UID: \"b4b66ff6-b653-4a4a-9d92-b16b94d4d4e9\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dt7c" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.108962 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3c639e36-21d6-4cda-8fb4-08c52ea849c7-signing-cabundle\") pod \"service-ca-9c57cc56f-7rkrg\" (UID: \"3c639e36-21d6-4cda-8fb4-08c52ea849c7\") " pod="openshift-service-ca/service-ca-9c57cc56f-7rkrg" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.110238 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a85003e7-763f-4480-af83-0a827574dc25-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jvr77\" (UID: \"a85003e7-763f-4480-af83-0a827574dc25\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jvr77" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.110524 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a9d6e924-f8c3-4f0a-92f3-942e822e5fc5-plugins-dir\") pod \"csi-hostpathplugin-w7tqs\" (UID: \"a9d6e924-f8c3-4f0a-92f3-942e822e5fc5\") " pod="hostpath-provisioner/csi-hostpathplugin-w7tqs" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.110587 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/786d9482-4e90-4a71-abf3-40bf3101fc86-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c8b7v\" (UID: \"786d9482-4e90-4a71-abf3-40bf3101fc86\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c8b7v" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.110919 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/adc5632d-1700-4ff8-a1db-7e53ee263222-images\") pod \"machine-config-operator-74547568cd-vkwpv\" (UID: 
\"adc5632d-1700-4ff8-a1db-7e53ee263222\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vkwpv" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.111360 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/979c3909-38ab-4fa1-9374-29d4ce969c8e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4g8pn\" (UID: \"979c3909-38ab-4fa1-9374-29d4ce969c8e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4g8pn" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.112258 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/adc5632d-1700-4ff8-a1db-7e53ee263222-auth-proxy-config\") pod \"machine-config-operator-74547568cd-vkwpv\" (UID: \"adc5632d-1700-4ff8-a1db-7e53ee263222\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vkwpv" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.112427 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a9d6e924-f8c3-4f0a-92f3-942e822e5fc5-csi-data-dir\") pod \"csi-hostpathplugin-w7tqs\" (UID: \"a9d6e924-f8c3-4f0a-92f3-942e822e5fc5\") " pod="hostpath-provisioner/csi-hostpathplugin-w7tqs" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.113052 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b36d62b-3186-4e2b-961e-0e3553f75036-config-volume\") pod \"dns-default-spphm\" (UID: \"6b36d62b-3186-4e2b-961e-0e3553f75036\") " pod="openshift-dns/dns-default-spphm" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.114018 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0162f2df-c29a-4c00-b445-67a9bae4c5ad-service-ca-bundle\") pod \"router-default-5444994796-c6rl5\" (UID: \"0162f2df-c29a-4c00-b445-67a9bae4c5ad\") " pod="openshift-ingress/router-default-5444994796-c6rl5" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.114940 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a628c8-c197-42af-a1e0-287e38308e8a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jz265\" (UID: \"c9a628c8-c197-42af-a1e0-287e38308e8a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jz265" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.115494 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/045f9b17-672f-4c8f-b397-95edf297e34f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zgvlv\" (UID: \"045f9b17-672f-4c8f-b397-95edf297e34f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zgvlv" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.116414 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a85003e7-763f-4480-af83-0a827574dc25-srv-cert\") pod \"olm-operator-6b444d44fb-jvr77\" (UID: \"a85003e7-763f-4480-af83-0a827574dc25\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jvr77" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.116795 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a9d6e924-f8c3-4f0a-92f3-942e822e5fc5-socket-dir\") pod \"csi-hostpathplugin-w7tqs\" (UID: \"a9d6e924-f8c3-4f0a-92f3-942e822e5fc5\") " pod="hostpath-provisioner/csi-hostpathplugin-w7tqs" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.117202 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/91c0402b-e438-4e77-8a6d-2765d09030e0-metrics-tls\") pod \"dns-operator-744455d44c-bzlvn\" (UID: \"91c0402b-e438-4e77-8a6d-2765d09030e0\") " pod="openshift-dns-operator/dns-operator-744455d44c-bzlvn" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.117507 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfa31788-625e-40c0-a671-90ff80e2f400-config\") pod \"kube-apiserver-operator-766d6c64bb-zph9v\" (UID: \"bfa31788-625e-40c0-a671-90ff80e2f400\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zph9v" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.117858 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be17e585-a456-4306-9613-ac2498fc550c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-j22d5\" (UID: \"be17e585-a456-4306-9613-ac2498fc550c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j22d5" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.118034 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0162f2df-c29a-4c00-b445-67a9bae4c5ad-metrics-certs\") pod \"router-default-5444994796-c6rl5\" (UID: \"0162f2df-c29a-4c00-b445-67a9bae4c5ad\") " pod="openshift-ingress/router-default-5444994796-c6rl5" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.118192 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/adc5632d-1700-4ff8-a1db-7e53ee263222-proxy-tls\") pod \"machine-config-operator-74547568cd-vkwpv\" (UID: \"adc5632d-1700-4ff8-a1db-7e53ee263222\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vkwpv" Nov 25 14:26:55 crc kubenswrapper[4796]: 
I1125 14:26:55.118517 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8d0d05a9-8584-4102-8f50-6c0e36923a3e-metrics-tls\") pod \"ingress-operator-5b745b69d9-qpbx9\" (UID: \"8d0d05a9-8584-4102-8f50-6c0e36923a3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qpbx9" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.119589 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d0d05a9-8584-4102-8f50-6c0e36923a3e-trusted-ca\") pod \"ingress-operator-5b745b69d9-qpbx9\" (UID: \"8d0d05a9-8584-4102-8f50-6c0e36923a3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qpbx9" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.120371 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a9d6e924-f8c3-4f0a-92f3-942e822e5fc5-registration-dir\") pod \"csi-hostpathplugin-w7tqs\" (UID: \"a9d6e924-f8c3-4f0a-92f3-942e822e5fc5\") " pod="hostpath-provisioner/csi-hostpathplugin-w7tqs" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.120827 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20ee15cc-da7a-4651-8ec6-a31683503069-cert\") pod \"ingress-canary-gbdnn\" (UID: \"20ee15cc-da7a-4651-8ec6-a31683503069\") " pod="openshift-ingress-canary/ingress-canary-gbdnn" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.123278 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be17e585-a456-4306-9613-ac2498fc550c-config\") pod \"kube-controller-manager-operator-78b949d7b-j22d5\" (UID: \"be17e585-a456-4306-9613-ac2498fc550c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j22d5" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 
14:26:55.127274 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfa31788-625e-40c0-a671-90ff80e2f400-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zph9v\" (UID: \"bfa31788-625e-40c0-a671-90ff80e2f400\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zph9v" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.127749 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9ec4132b-350d-4ae8-9f11-38218dd0b07e-webhook-cert\") pod \"packageserver-d55dfcdfc-b72mt\" (UID: \"9ec4132b-350d-4ae8-9f11-38218dd0b07e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b72mt" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.128100 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4b66ff6-b653-4a4a-9d92-b16b94d4d4e9-serving-cert\") pod \"service-ca-operator-777779d784-4dt7c\" (UID: \"b4b66ff6-b653-4a4a-9d92-b16b94d4d4e9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dt7c" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.128534 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f667ef84-04a1-4c76-95d7-75648124470f-srv-cert\") pod \"catalog-operator-68c6474976-wdh6v\" (UID: \"f667ef84-04a1-4c76-95d7-75648124470f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wdh6v" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.129166 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e91d3d88-7e80-4c0d-8c97-405ba9487fe7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6sp9g\" (UID: \"e91d3d88-7e80-4c0d-8c97-405ba9487fe7\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6sp9g" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.129716 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/371efde1-ba39-4eed-93e4-743cb2e7d996-certs\") pod \"machine-config-server-mtw8r\" (UID: \"371efde1-ba39-4eed-93e4-743cb2e7d996\") " pod="openshift-machine-config-operator/machine-config-server-mtw8r" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.129857 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fab48abd-b847-4828-99f2-e9d7d3312e94-secret-volume\") pod \"collect-profiles-29401335-kz9sv\" (UID: \"fab48abd-b847-4828-99f2-e9d7d3312e94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401335-kz9sv" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.130929 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6b36d62b-3186-4e2b-961e-0e3553f75036-metrics-tls\") pod \"dns-default-spphm\" (UID: \"6b36d62b-3186-4e2b-961e-0e3553f75036\") " pod="openshift-dns/dns-default-spphm" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.131864 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3c639e36-21d6-4cda-8fb4-08c52ea849c7-signing-key\") pod \"service-ca-9c57cc56f-7rkrg\" (UID: \"3c639e36-21d6-4cda-8fb4-08c52ea849c7\") " pod="openshift-service-ca/service-ca-9c57cc56f-7rkrg" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.132353 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9ec4132b-350d-4ae8-9f11-38218dd0b07e-tmpfs\") pod \"packageserver-d55dfcdfc-b72mt\" (UID: \"9ec4132b-350d-4ae8-9f11-38218dd0b07e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b72mt" 
Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.134700 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0162f2df-c29a-4c00-b445-67a9bae4c5ad-default-certificate\") pod \"router-default-5444994796-c6rl5\" (UID: \"0162f2df-c29a-4c00-b445-67a9bae4c5ad\") " pod="openshift-ingress/router-default-5444994796-c6rl5" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.137265 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/979c3909-38ab-4fa1-9374-29d4ce969c8e-proxy-tls\") pod \"machine-config-controller-84d6567774-4g8pn\" (UID: \"979c3909-38ab-4fa1-9374-29d4ce969c8e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4g8pn" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.137967 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/045f9b17-672f-4c8f-b397-95edf297e34f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zgvlv\" (UID: \"045f9b17-672f-4c8f-b397-95edf297e34f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zgvlv" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.138474 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/506a2195-43f9-4a3a-ad03-ad55166c7e03-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9lndl\" (UID: \"506a2195-43f9-4a3a-ad03-ad55166c7e03\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lndl" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.139046 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/63aeb87d-a8b1-40a5-95b9-e224d1bd968f-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-64jzs\" (UID: \"63aeb87d-a8b1-40a5-95b9-e224d1bd968f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64jzs" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.140740 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a628c8-c197-42af-a1e0-287e38308e8a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jz265\" (UID: \"c9a628c8-c197-42af-a1e0-287e38308e8a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jz265" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.142404 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9ec4132b-350d-4ae8-9f11-38218dd0b07e-apiservice-cert\") pod \"packageserver-d55dfcdfc-b72mt\" (UID: \"9ec4132b-350d-4ae8-9f11-38218dd0b07e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b72mt" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.143081 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0162f2df-c29a-4c00-b445-67a9bae4c5ad-stats-auth\") pod \"router-default-5444994796-c6rl5\" (UID: \"0162f2df-c29a-4c00-b445-67a9bae4c5ad\") " pod="openshift-ingress/router-default-5444994796-c6rl5" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.143555 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f667ef84-04a1-4c76-95d7-75648124470f-profile-collector-cert\") pod \"catalog-operator-68c6474976-wdh6v\" (UID: \"f667ef84-04a1-4c76-95d7-75648124470f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wdh6v" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.144177 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/371efde1-ba39-4eed-93e4-743cb2e7d996-node-bootstrap-token\") pod \"machine-config-server-mtw8r\" (UID: \"371efde1-ba39-4eed-93e4-743cb2e7d996\") " pod="openshift-machine-config-operator/machine-config-server-mtw8r" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.149233 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmgwr\" (UniqueName: \"kubernetes.io/projected/0162f2df-c29a-4c00-b445-67a9bae4c5ad-kube-api-access-lmgwr\") pod \"router-default-5444994796-c6rl5\" (UID: \"0162f2df-c29a-4c00-b445-67a9bae4c5ad\") " pod="openshift-ingress/router-default-5444994796-c6rl5" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.149774 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qmn8p"] Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.153555 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vzn94"] Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.159205 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tsl5t"] Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.163530 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgjfv\" (UniqueName: \"kubernetes.io/projected/506a2195-43f9-4a3a-ad03-ad55166c7e03-kube-api-access-vgjfv\") pod \"marketplace-operator-79b997595-9lndl\" (UID: \"506a2195-43f9-4a3a-ad03-ad55166c7e03\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lndl" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.170127 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-x57qm"] Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.171710 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-c6rl5" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.194831 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms2pb\" (UniqueName: \"kubernetes.io/projected/b4b66ff6-b653-4a4a-9d92-b16b94d4d4e9-kube-api-access-ms2pb\") pod \"service-ca-operator-777779d784-4dt7c\" (UID: \"b4b66ff6-b653-4a4a-9d92-b16b94d4d4e9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dt7c" Nov 25 14:26:55 crc kubenswrapper[4796]: W1125 14:26:55.196408 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc239761f_ade6_47eb_8fa5_f5178577ccb1.slice/crio-51552cfc081d9a34c302684b245ff4b883fba504b9381228498fed64a03916ec WatchSource:0}: Error finding container 51552cfc081d9a34c302684b245ff4b883fba504b9381228498fed64a03916ec: Status 404 returned error can't find the container with id 51552cfc081d9a34c302684b245ff4b883fba504b9381228498fed64a03916ec Nov 25 14:26:55 crc kubenswrapper[4796]: W1125 14:26:55.197799 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod453a1a57_5017_420d_b2e5_2fef1a7721f5.slice/crio-61dbded2cc72dcdd9d8371602bf558afb30e9e6281ec09fb9207f6ffdb9ca6f9 WatchSource:0}: Error finding container 61dbded2cc72dcdd9d8371602bf558afb30e9e6281ec09fb9207f6ffdb9ca6f9: Status 404 returned error can't find the container with id 61dbded2cc72dcdd9d8371602bf558afb30e9e6281ec09fb9207f6ffdb9ca6f9 Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.204658 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:55 crc kubenswrapper[4796]: E1125 14:26:55.205494 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:55.705479707 +0000 UTC m=+144.048589131 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.213842 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcpqs\" (UniqueName: \"kubernetes.io/projected/786d9482-4e90-4a71-abf3-40bf3101fc86-kube-api-access-vcpqs\") pod \"multus-admission-controller-857f4d67dd-c8b7v\" (UID: \"786d9482-4e90-4a71-abf3-40bf3101fc86\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c8b7v" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.218625 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jjf82"] Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.223273 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vbgn5"] Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.224861 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdzh2\" (UniqueName: \"kubernetes.io/projected/4f72561b-ab6e-4eb5-b855-bbafd724ce5f-kube-api-access-wdzh2\") pod \"migrator-59844c95c7-rtsgw\" 
(UID: \"4f72561b-ab6e-4eb5-b855-bbafd724ce5f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rtsgw" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.229654 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lvdx5"] Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.243153 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7c7v\" (UniqueName: \"kubernetes.io/projected/adc5632d-1700-4ff8-a1db-7e53ee263222-kube-api-access-k7c7v\") pod \"machine-config-operator-74547568cd-vkwpv\" (UID: \"adc5632d-1700-4ff8-a1db-7e53ee263222\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vkwpv" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.245911 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-c8b7v" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.252430 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vkwpv" Nov 25 14:26:55 crc kubenswrapper[4796]: W1125 14:26:55.259295 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67c0424c_b0ff_417d_bf4c_1cdcadd1ebac.slice/crio-352489902a14f7f9eab138d66b79c76d3ae6ad754b339eae13414fa3d1a70f72 WatchSource:0}: Error finding container 352489902a14f7f9eab138d66b79c76d3ae6ad754b339eae13414fa3d1a70f72: Status 404 returned error can't find the container with id 352489902a14f7f9eab138d66b79c76d3ae6ad754b339eae13414fa3d1a70f72 Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.260508 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcw5l\" (UniqueName: \"kubernetes.io/projected/fab48abd-b847-4828-99f2-e9d7d3312e94-kube-api-access-dcw5l\") pod \"collect-profiles-29401335-kz9sv\" (UID: \"fab48abd-b847-4828-99f2-e9d7d3312e94\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401335-kz9sv" Nov 25 14:26:55 crc kubenswrapper[4796]: W1125 14:26:55.260646 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b75fc2c_7703_4bee_9e6b_6ea32511fc42.slice/crio-e480e82554d122c238f5e52ef172c4abd6ce85e85ba7b982886014a424488321 WatchSource:0}: Error finding container e480e82554d122c238f5e52ef172c4abd6ce85e85ba7b982886014a424488321: Status 404 returned error can't find the container with id e480e82554d122c238f5e52ef172c4abd6ce85e85ba7b982886014a424488321 Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.267395 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-k6xrl"] Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.269990 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd"] Nov 25 14:26:55 
crc kubenswrapper[4796]: I1125 14:26:55.275768 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401335-kz9sv" Nov 25 14:26:55 crc kubenswrapper[4796]: W1125 14:26:55.283310 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5991c579_d1dc_44d7_b62e_2465d9c2aa4b.slice/crio-c816420b6c8c025170d703a7e97da76d9f7b0437624242d5b15630ce88a5efdc WatchSource:0}: Error finding container c816420b6c8c025170d703a7e97da76d9f7b0437624242d5b15630ce88a5efdc: Status 404 returned error can't find the container with id c816420b6c8c025170d703a7e97da76d9f7b0437624242d5b15630ce88a5efdc Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.286407 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfnq8\" (UniqueName: \"kubernetes.io/projected/9ec4132b-350d-4ae8-9f11-38218dd0b07e-kube-api-access-tfnq8\") pod \"packageserver-d55dfcdfc-b72mt\" (UID: \"9ec4132b-350d-4ae8-9f11-38218dd0b07e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b72mt" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.301892 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dt7c" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.302099 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9zrq\" (UniqueName: \"kubernetes.io/projected/a85003e7-763f-4480-af83-0a827574dc25-kube-api-access-f9zrq\") pod \"olm-operator-6b444d44fb-jvr77\" (UID: \"a85003e7-763f-4480-af83-0a827574dc25\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jvr77" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.305628 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:55 crc kubenswrapper[4796]: E1125 14:26:55.305820 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:55.805789976 +0000 UTC m=+144.148899400 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.306202 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:55 crc kubenswrapper[4796]: E1125 14:26:55.306497 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:55.806486305 +0000 UTC m=+144.149595729 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.310767 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9lndl" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.319915 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58s8t\" (UniqueName: \"kubernetes.io/projected/91c0402b-e438-4e77-8a6d-2765d09030e0-kube-api-access-58s8t\") pod \"dns-operator-744455d44c-bzlvn\" (UID: \"91c0402b-e438-4e77-8a6d-2765d09030e0\") " pod="openshift-dns-operator/dns-operator-744455d44c-bzlvn" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.344134 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-w9vpf"] Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.344649 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdr65\" (UniqueName: \"kubernetes.io/projected/6b36d62b-3186-4e2b-961e-0e3553f75036-kube-api-access-gdr65\") pod \"dns-default-spphm\" (UID: \"6b36d62b-3186-4e2b-961e-0e3553f75036\") " pod="openshift-dns/dns-default-spphm" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.378836 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d0d05a9-8584-4102-8f50-6c0e36923a3e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qpbx9\" (UID: \"8d0d05a9-8584-4102-8f50-6c0e36923a3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qpbx9" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.392011 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zq9x\" (UniqueName: \"kubernetes.io/projected/8d0d05a9-8584-4102-8f50-6c0e36923a3e-kube-api-access-8zq9x\") pod \"ingress-operator-5b745b69d9-qpbx9\" (UID: \"8d0d05a9-8584-4102-8f50-6c0e36923a3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qpbx9" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.408735 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/045f9b17-672f-4c8f-b397-95edf297e34f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zgvlv\" (UID: \"045f9b17-672f-4c8f-b397-95edf297e34f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zgvlv" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.415393 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:55 crc kubenswrapper[4796]: E1125 14:26:55.415968 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:55.915948438 +0000 UTC m=+144.259057862 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.423789 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvhhl\" (UniqueName: \"kubernetes.io/projected/e91d3d88-7e80-4c0d-8c97-405ba9487fe7-kube-api-access-lvhhl\") pod \"package-server-manager-789f6589d5-6sp9g\" (UID: \"e91d3d88-7e80-4c0d-8c97-405ba9487fe7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6sp9g" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.445084 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bfa31788-625e-40c0-a671-90ff80e2f400-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zph9v\" (UID: \"bfa31788-625e-40c0-a671-90ff80e2f400\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zph9v" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.458385 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qpbx9" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.463187 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-bzlvn" Nov 25 14:26:55 crc kubenswrapper[4796]: W1125 14:26:55.464343 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93145a04_d9cc_419c_aac9_a236aa357d00.slice/crio-f5673ac98c4f16b3d871902a73a833a7a181a7ca644f89f120fe5c60b0818187 WatchSource:0}: Error finding container f5673ac98c4f16b3d871902a73a833a7a181a7ca644f89f120fe5c60b0818187: Status 404 returned error can't find the container with id f5673ac98c4f16b3d871902a73a833a7a181a7ca644f89f120fe5c60b0818187 Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.467305 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npdr5\" (UniqueName: \"kubernetes.io/projected/c9a628c8-c197-42af-a1e0-287e38308e8a-kube-api-access-npdr5\") pod \"kube-storage-version-migrator-operator-b67b599dd-jz265\" (UID: \"c9a628c8-c197-42af-a1e0-287e38308e8a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jz265" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.478843 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zph9v" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.486813 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zgvlv" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.489019 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prdd4\" (UniqueName: \"kubernetes.io/projected/371efde1-ba39-4eed-93e4-743cb2e7d996-kube-api-access-prdd4\") pod \"machine-config-server-mtw8r\" (UID: \"371efde1-ba39-4eed-93e4-743cb2e7d996\") " pod="openshift-machine-config-operator/machine-config-server-mtw8r" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.492856 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jz265" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.502979 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhkrs\" (UniqueName: \"kubernetes.io/projected/3c639e36-21d6-4cda-8fb4-08c52ea849c7-kube-api-access-bhkrs\") pod \"service-ca-9c57cc56f-7rkrg\" (UID: \"3c639e36-21d6-4cda-8fb4-08c52ea849c7\") " pod="openshift-service-ca/service-ca-9c57cc56f-7rkrg" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.512919 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rtsgw" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.516175 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x57qm" event={"ID":"fa025925-c61e-49ae-ba50-79f4a401a20f","Type":"ContainerStarted","Data":"ffebb67b6d763ec271de5766b70e9385a908588deb94c928286ce46dc7f830ba"} Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.516627 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:55 crc kubenswrapper[4796]: E1125 14:26:55.516916 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:56.016900375 +0000 UTC m=+144.360009799 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.517133 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" event={"ID":"7b22dd74-4a14-454e-8b0d-9fdb57ce6653","Type":"ContainerStarted","Data":"08134d3b06c0effe36a66d8b2e21e2420dcec6a9ba689036428b4e2181a72baf"} Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.521115 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qmn8p" event={"ID":"e561356f-4d50-4b6a-86f5-d7796e069802","Type":"ContainerStarted","Data":"977e53e6a778ff987c9e7b82e8bb600d7da3c2d9a8d409cef5b92ff011e9ae08"} Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.525497 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnvrv" event={"ID":"e4d591da-4385-4890-ab0d-1a1ee8d934ea","Type":"ContainerStarted","Data":"e0e159099b0cc5de4c0810bc207f5b2093cd55ae182ca7c0f85e7d907dbd7ebd"} Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.525543 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnvrv" event={"ID":"e4d591da-4385-4890-ab0d-1a1ee8d934ea","Type":"ContainerStarted","Data":"73946b8b4943e165b3f5461012ff9f1bb270b1135e3ab34ad09aee535af29e54"} Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.525644 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/be17e585-a456-4306-9613-ac2498fc550c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-j22d5\" (UID: \"be17e585-a456-4306-9613-ac2498fc550c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j22d5" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.528353 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55" event={"ID":"0d8de494-9c7a-47e6-afa1-47007836acd8","Type":"ContainerStarted","Data":"e7db92ac79699209a60b2cca04a7833200162c51e9e8f6ba4c40ebdfc6640d3b"} Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.528384 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55" event={"ID":"0d8de494-9c7a-47e6-afa1-47007836acd8","Type":"ContainerStarted","Data":"85acb8728a36217f493b15770a20bec981d51c3b5e0e888791d41b430ab40d80"} Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.530116 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vzn94" event={"ID":"453a1a57-5017-420d-b2e5-2fef1a7721f5","Type":"ContainerStarted","Data":"61dbded2cc72dcdd9d8371602bf558afb30e9e6281ec09fb9207f6ffdb9ca6f9"} Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.532267 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" event={"ID":"76da93ba-dcf4-4f52-982f-ce98a9718cc8","Type":"ContainerStarted","Data":"09a4846608331bc352acf3042e5691efc5141b2ce5a8575101acb8ac8aebead6"} Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.533700 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k6xrl" event={"ID":"5991c579-d1dc-44d7-b62e-2465d9c2aa4b","Type":"ContainerStarted","Data":"c816420b6c8c025170d703a7e97da76d9f7b0437624242d5b15630ce88a5efdc"} Nov 
25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.536128 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-fhbvb" event={"ID":"94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba","Type":"ContainerStarted","Data":"d086aa89dba17e0bfaac4a05a7d4e62f5cd4889b8738ceaaf1ab860d08d7b36d"} Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.536170 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-fhbvb" event={"ID":"94ac9d8e-4a41-4ea2-aa9a-76a68c3aa2ba","Type":"ContainerStarted","Data":"b134f45eb90de8191c8a24bcff5c865d20e9b7536c1860784bfce45c3a35433e"} Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.537937 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-w9vpf" event={"ID":"93145a04-d9cc-419c-aac9-a236aa357d00","Type":"ContainerStarted","Data":"f5673ac98c4f16b3d871902a73a833a7a181a7ca644f89f120fe5c60b0818187"} Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.538548 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5" event={"ID":"8b75fc2c-7703-4bee-9e6b-6ea32511fc42","Type":"ContainerStarted","Data":"e480e82554d122c238f5e52ef172c4abd6ce85e85ba7b982886014a424488321"} Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.539407 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qj88x" event={"ID":"0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2","Type":"ContainerStarted","Data":"045b2005d1923b6b7e645e73bc4782348e684afbf49b9bf3ace16788e6f573d3"} Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.539425 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qj88x" 
event={"ID":"0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2","Type":"ContainerStarted","Data":"0830c4ff034052cfef00c5ecf948b4d77441c374c8c90a0da77db1e6009ed51e"} Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.545451 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-vkwpv"] Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.547511 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tsl5t" event={"ID":"c239761f-ade6-47eb-8fa5-f5178577ccb1","Type":"ContainerStarted","Data":"51552cfc081d9a34c302684b245ff4b883fba504b9381228498fed64a03916ec"} Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.550477 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-c6rl5" event={"ID":"0162f2df-c29a-4c00-b445-67a9bae4c5ad","Type":"ContainerStarted","Data":"90f730721da4ce088ef6f42c625e2a2501d3988c6965b733dcc9bd91e3cd815f"} Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.555390 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-njqdf" event={"ID":"345ca21d-184d-4326-b97e-976d4190ae2f","Type":"ContainerStarted","Data":"1bb5ef083aa2feb85e6b92273f4a96f4c190dd86618c6d7a92bcf3f281b7b1e8"} Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.555428 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-njqdf" event={"ID":"345ca21d-184d-4326-b97e-976d4190ae2f","Type":"ContainerStarted","Data":"fb6e7486bf765391eb38a3a727e2bc7a370e3b5f6f2531d73e08276af2c4e498"} Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.557757 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zphxs\" (UniqueName: \"kubernetes.io/projected/a9d6e924-f8c3-4f0a-92f3-942e822e5fc5-kube-api-access-zphxs\") 
pod \"csi-hostpathplugin-w7tqs\" (UID: \"a9d6e924-f8c3-4f0a-92f3-942e822e5fc5\") " pod="hostpath-provisioner/csi-hostpathplugin-w7tqs" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.558601 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bxtz9" event={"ID":"524a60f2-4fff-4571-9f11-99d5178fd2a3","Type":"ContainerStarted","Data":"97dbec55669e6814a65927540db345f14947c7b28510f4325b82601dc55c026a"} Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.558627 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bxtz9" event={"ID":"524a60f2-4fff-4571-9f11-99d5178fd2a3","Type":"ContainerStarted","Data":"23edb0edbb3ad4271787c987b585ff39b24dc39b6833c6b164f5e8d85b4e2ba3"} Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.559728 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jjf82" event={"ID":"3d926d83-e3cc-4bf1-ba33-629f2c058590","Type":"ContainerStarted","Data":"2826c914c7b8efe17ac38efb0d521e20738e2d46b71e337a4ebd175c1892fb58"} Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.560984 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6sp9g" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.561139 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p8gt\" (UniqueName: \"kubernetes.io/projected/979c3909-38ab-4fa1-9374-29d4ce969c8e-kube-api-access-2p8gt\") pod \"machine-config-controller-84d6567774-4g8pn\" (UID: \"979c3909-38ab-4fa1-9374-29d4ce969c8e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4g8pn" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.565184 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lvdx5" event={"ID":"67c0424c-b0ff-417d-bf4c-1cdcadd1ebac","Type":"ContainerStarted","Data":"352489902a14f7f9eab138d66b79c76d3ae6ad754b339eae13414fa3d1a70f72"} Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.569273 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b72mt" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.578523 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-c8b7v"] Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.581408 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t9g7\" (UniqueName: \"kubernetes.io/projected/63aeb87d-a8b1-40a5-95b9-e224d1bd968f-kube-api-access-7t9g7\") pod \"control-plane-machine-set-operator-78cbb6b69f-64jzs\" (UID: \"63aeb87d-a8b1-40a5-95b9-e224d1bd968f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64jzs" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.586129 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-spphm" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.596145 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jvr77" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.603406 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qkvk\" (UniqueName: \"kubernetes.io/projected/20ee15cc-da7a-4651-8ec6-a31683503069-kube-api-access-6qkvk\") pod \"ingress-canary-gbdnn\" (UID: \"20ee15cc-da7a-4651-8ec6-a31683503069\") " pod="openshift-ingress-canary/ingress-canary-gbdnn" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.605308 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401335-kz9sv"] Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.617686 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.618252 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7rkrg" Nov 25 14:26:55 crc kubenswrapper[4796]: E1125 14:26:55.618265 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:56.118245471 +0000 UTC m=+144.461354895 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.632994 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-mtw8r" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.633472 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7b9d\" (UniqueName: \"kubernetes.io/projected/f667ef84-04a1-4c76-95d7-75648124470f-kube-api-access-l7b9d\") pod \"catalog-operator-68c6474976-wdh6v\" (UID: \"f667ef84-04a1-4c76-95d7-75648124470f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wdh6v" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.644283 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-w7tqs" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.653853 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-gbdnn" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.719349 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:55 crc kubenswrapper[4796]: E1125 14:26:55.719645 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:56.219633959 +0000 UTC m=+144.562743383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.798976 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64jzs" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.799355 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9lndl"] Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.805154 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j22d5" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.820857 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:55 crc kubenswrapper[4796]: E1125 14:26:55.821102 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:56.321080398 +0000 UTC m=+144.664189822 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.821350 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:55 crc kubenswrapper[4796]: E1125 14:26:55.821912 4796 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:56.32190256 +0000 UTC m=+144.665011984 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.824337 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4g8pn" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.831444 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wdh6v" Nov 25 14:26:55 crc kubenswrapper[4796]: I1125 14:26:55.924724 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:55 crc kubenswrapper[4796]: E1125 14:26:55.925095 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:56.425079146 +0000 UTC m=+144.768188560 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.027516 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:56 crc kubenswrapper[4796]: E1125 14:26:56.027987 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:56.527969683 +0000 UTC m=+144.871079097 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.133863 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:56 crc kubenswrapper[4796]: E1125 14:26:56.134327 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:56.634310483 +0000 UTC m=+144.977419907 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.164183 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jz265"] Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.187543 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-spphm"] Nov 25 14:26:56 crc kubenswrapper[4796]: W1125 14:26:56.193701 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9a628c8_c197_42af_a1e0_287e38308e8a.slice/crio-fa64123cbe253216c92078ca7fd05e2b010e2c04a0cd7bcc1b930730e529b8bd WatchSource:0}: Error finding container fa64123cbe253216c92078ca7fd05e2b010e2c04a0cd7bcc1b930730e529b8bd: Status 404 returned error can't find the container with id fa64123cbe253216c92078ca7fd05e2b010e2c04a0cd7bcc1b930730e529b8bd Nov 25 14:26:56 crc kubenswrapper[4796]: W1125 14:26:56.226858 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b36d62b_3186_4e2b_961e_0e3553f75036.slice/crio-c7bef7390c3fb52c7a48762c9c20470f4d2fb48a68eecafa1ba8289b9b326119 WatchSource:0}: Error finding container c7bef7390c3fb52c7a48762c9c20470f4d2fb48a68eecafa1ba8289b9b326119: Status 404 returned error can't find the container with id c7bef7390c3fb52c7a48762c9c20470f4d2fb48a68eecafa1ba8289b9b326119 Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.239401 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:56 crc kubenswrapper[4796]: E1125 14:26:56.239859 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:56.739848123 +0000 UTC m=+145.082957547 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.251244 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zph9v"] Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.266173 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7rkrg"] Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.301874 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4dt7c"] Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.315532 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qpbx9"] Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 
14:26:56.326994 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-njqdf" podStartSLOduration=122.326957059 podStartE2EDuration="2m2.326957059s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:56.299111456 +0000 UTC m=+144.642220890" watchObservedRunningTime="2025-11-25 14:26:56.326957059 +0000 UTC m=+144.670066483" Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.340131 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:56 crc kubenswrapper[4796]: E1125 14:26:56.340466 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:56.840449809 +0000 UTC m=+145.183559233 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.442457 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:56 crc kubenswrapper[4796]: E1125 14:26:56.442815 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:56.942803593 +0000 UTC m=+145.285913017 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.499314 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6sp9g"] Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.501025 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rtsgw"] Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.503773 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zgvlv"] Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.505744 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bzlvn"] Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.507636 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-gbdnn"] Nov 25 14:26:56 crc kubenswrapper[4796]: W1125 14:26:56.517755 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f72561b_ab6e_4eb5_b855_bbafd724ce5f.slice/crio-83c56fb4f7ee19c6369308d7c23b3bd77447ccf110c1cc87e4ee4435a84c244f WatchSource:0}: Error finding container 83c56fb4f7ee19c6369308d7c23b3bd77447ccf110c1cc87e4ee4435a84c244f: Status 404 returned error can't find the container with id 83c56fb4f7ee19c6369308d7c23b3bd77447ccf110c1cc87e4ee4435a84c244f Nov 25 14:26:56 crc kubenswrapper[4796]: W1125 
14:26:56.520397 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod045f9b17_672f_4c8f_b397_95edf297e34f.slice/crio-e75381d77756b853bfec42e616602f53eb969259ae3bbf19c7f0ffe68cbc5122 WatchSource:0}: Error finding container e75381d77756b853bfec42e616602f53eb969259ae3bbf19c7f0ffe68cbc5122: Status 404 returned error can't find the container with id e75381d77756b853bfec42e616602f53eb969259ae3bbf19c7f0ffe68cbc5122 Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.543259 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:56 crc kubenswrapper[4796]: E1125 14:26:56.543415 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:57.043391149 +0000 UTC m=+145.386500573 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.543565 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:56 crc kubenswrapper[4796]: E1125 14:26:56.543846 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:57.043839361 +0000 UTC m=+145.386948785 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.578136 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vzn94" event={"ID":"453a1a57-5017-420d-b2e5-2fef1a7721f5","Type":"ContainerStarted","Data":"e8231cbbafe31c46c6bc554b9ebe6aa1474ee48a20ec476de5f6ccdd4808803d"} Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.579934 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zph9v" event={"ID":"bfa31788-625e-40c0-a671-90ff80e2f400","Type":"ContainerStarted","Data":"7164307913c3c8c83fd40bb6aa3c61bdd5bed4357dbf443482257f2a72953088"} Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.614626 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bzlvn" event={"ID":"91c0402b-e438-4e77-8a6d-2765d09030e0","Type":"ContainerStarted","Data":"f8a8993ce5c432e2521001255086d7b4da541a648bc5a650533dd240e517ac17"} Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.617210 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qpbx9" event={"ID":"8d0d05a9-8584-4102-8f50-6c0e36923a3e","Type":"ContainerStarted","Data":"7d046d5ae4323bbd47497fca6ab630ec0a72e2ea22895e27be74bed5843ff45d"} Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.617920 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7rkrg" 
event={"ID":"3c639e36-21d6-4cda-8fb4-08c52ea849c7","Type":"ContainerStarted","Data":"e4e78be2d9d741c505e4aa3891864f2f977510089f8f75b4833dd242ed2ef965"} Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.618709 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rtsgw" event={"ID":"4f72561b-ab6e-4eb5-b855-bbafd724ce5f","Type":"ContainerStarted","Data":"83c56fb4f7ee19c6369308d7c23b3bd77447ccf110c1cc87e4ee4435a84c244f"} Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.625534 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-gbdnn" event={"ID":"20ee15cc-da7a-4651-8ec6-a31683503069","Type":"ContainerStarted","Data":"56dd529cb94abce10d6ef42e519784d8521a71b8c79f61045cd5668697ac396c"} Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.632118 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jvr77"] Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.633103 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qj88x" event={"ID":"0c69a4aa-fd90-439f-8d5a-f402e0dcd0b2","Type":"ContainerStarted","Data":"da3c838ff7feb655758a152a4521f14b9f596bf4825d6f8660a97fd367c7cf65"} Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.644264 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vkwpv" event={"ID":"adc5632d-1700-4ff8-a1db-7e53ee263222","Type":"ContainerStarted","Data":"64ac4f5b38034dfe2200b516eb276fc71c8571a4cde2f2bf7440fbb8b6cbbed4"} Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.647027 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:56 crc kubenswrapper[4796]: E1125 14:26:56.647300 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:57.147282304 +0000 UTC m=+145.490391728 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.647339 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:56 crc kubenswrapper[4796]: E1125 14:26:56.647633 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:57.147626933 +0000 UTC m=+145.490736357 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.649300 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-w7tqs"] Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.658125 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b72mt"] Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.658165 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-spphm" event={"ID":"6b36d62b-3186-4e2b-961e-0e3553f75036","Type":"ContainerStarted","Data":"c7bef7390c3fb52c7a48762c9c20470f4d2fb48a68eecafa1ba8289b9b326119"} Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.659955 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4g8pn"] Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.660506 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9lndl" event={"ID":"506a2195-43f9-4a3a-ad03-ad55166c7e03","Type":"ContainerStarted","Data":"dd157b2794ec5aeafa677e53162d3ecaebeacc4c57a1f4000c8956b3de649e54"} Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.663219 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dt7c" 
event={"ID":"b4b66ff6-b653-4a4a-9d92-b16b94d4d4e9","Type":"ContainerStarted","Data":"aacd58cece13152ee859d07ac4c4782c53d23a2be631a53c9c498c292d7ec18c"} Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.664854 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x57qm" event={"ID":"fa025925-c61e-49ae-ba50-79f4a401a20f","Type":"ContainerStarted","Data":"7f29b11936f11ad934b084c682396767f18fe881b08905b803a6480e405ef20b"} Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.667194 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" event={"ID":"76da93ba-dcf4-4f52-982f-ce98a9718cc8","Type":"ContainerStarted","Data":"17768babcdf83fc3b3730d7427490272737fb4a258e70d95118fce3c42527648"} Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.668230 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.673967 4796 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-dr5s9 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.26:6443/healthz\": dial tcp 10.217.0.26:6443: connect: connection refused" start-of-body= Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.674014 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" podUID="76da93ba-dcf4-4f52-982f-ce98a9718cc8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.26:6443/healthz\": dial tcp 10.217.0.26:6443: connect: connection refused" Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.680155 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401335-kz9sv" 
event={"ID":"fab48abd-b847-4828-99f2-e9d7d3312e94","Type":"ContainerStarted","Data":"c6f93103c8a2edb3fd5a71e11ade6c4c55a83d39f2958c995be15870ca6b1930"} Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.696325 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-mtw8r" event={"ID":"371efde1-ba39-4eed-93e4-743cb2e7d996","Type":"ContainerStarted","Data":"89188c86a5f6c94b20d94d847ad92b45382bc20b730015965f0db15732eb2dd0"} Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.706250 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qmn8p" event={"ID":"e561356f-4d50-4b6a-86f5-d7796e069802","Type":"ContainerStarted","Data":"3b7715344b6fea658233311d6a9c55cd39e2880b3e18fe7ea88816d90f801528"} Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.707413 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6sp9g" event={"ID":"e91d3d88-7e80-4c0d-8c97-405ba9487fe7","Type":"ContainerStarted","Data":"23f0b4ea6f8b6d076c6044052d3dd154b48372e34c13a4368d288d65e88a37bd"} Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.708516 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jz265" event={"ID":"c9a628c8-c197-42af-a1e0-287e38308e8a","Type":"ContainerStarted","Data":"fa64123cbe253216c92078ca7fd05e2b010e2c04a0cd7bcc1b930730e529b8bd"} Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.712502 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64jzs"] Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.713168 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-c8b7v" 
event={"ID":"786d9482-4e90-4a71-abf3-40bf3101fc86","Type":"ContainerStarted","Data":"21d4aaa4386bcfe7d11563cf80472e053b32cace7ee81b79db3fd645675c8780"} Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.715354 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zgvlv" event={"ID":"045f9b17-672f-4c8f-b397-95edf297e34f","Type":"ContainerStarted","Data":"e75381d77756b853bfec42e616602f53eb969259ae3bbf19c7f0ffe68cbc5122"} Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.715498 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55" Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.719173 4796 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-6tp55 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.719216 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55" podUID="0d8de494-9c7a-47e6-afa1-47007836acd8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.719437 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j22d5"] Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.727977 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wdh6v"] Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 
14:26:56.747844 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:56 crc kubenswrapper[4796]: W1125 14:26:56.749306 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9d6e924_f8c3_4f0a_92f3_942e822e5fc5.slice/crio-3644d14da0140b089d4be2bd35e7c85eb7c7b14e61fab50430eee27abf8df6f8 WatchSource:0}: Error finding container 3644d14da0140b089d4be2bd35e7c85eb7c7b14e61fab50430eee27abf8df6f8: Status 404 returned error can't find the container with id 3644d14da0140b089d4be2bd35e7c85eb7c7b14e61fab50430eee27abf8df6f8 Nov 25 14:26:56 crc kubenswrapper[4796]: E1125 14:26:56.750364 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:57.250341126 +0000 UTC m=+145.593450560 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:56 crc kubenswrapper[4796]: W1125 14:26:56.751980 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod979c3909_38ab_4fa1_9374_29d4ce969c8e.slice/crio-8a2e0ebe1cb84bfc75d39b05a0702bfa7481ce597cb249763ad61e5eeeca0880 WatchSource:0}: Error finding container 8a2e0ebe1cb84bfc75d39b05a0702bfa7481ce597cb249763ad61e5eeeca0880: Status 404 returned error can't find the container with id 8a2e0ebe1cb84bfc75d39b05a0702bfa7481ce597cb249763ad61e5eeeca0880 Nov 25 14:26:56 crc kubenswrapper[4796]: W1125 14:26:56.754058 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe17e585_a456_4306_9613_ac2498fc550c.slice/crio-20bdf1814a9a92affa4faa2f796bf2eacbb69b19a83383164b83cb886959f893 WatchSource:0}: Error finding container 20bdf1814a9a92affa4faa2f796bf2eacbb69b19a83383164b83cb886959f893: Status 404 returned error can't find the container with id 20bdf1814a9a92affa4faa2f796bf2eacbb69b19a83383164b83cb886959f893 Nov 25 14:26:56 crc kubenswrapper[4796]: W1125 14:26:56.774725 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63aeb87d_a8b1_40a5_95b9_e224d1bd968f.slice/crio-b4e768bda2fb01e54d1b3e7c84545412b18985720334321ee8699961d772f7dc WatchSource:0}: Error finding container b4e768bda2fb01e54d1b3e7c84545412b18985720334321ee8699961d772f7dc: Status 404 returned error can't find the container with 
id b4e768bda2fb01e54d1b3e7c84545412b18985720334321ee8699961d772f7dc Nov 25 14:26:56 crc kubenswrapper[4796]: W1125 14:26:56.784390 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf667ef84_04a1_4c76_95d7_75648124470f.slice/crio-15b15f769043739b4044c4158598667f3aeee620d3bcb9b51accd4a058c43a66 WatchSource:0}: Error finding container 15b15f769043739b4044c4158598667f3aeee620d3bcb9b51accd4a058c43a66: Status 404 returned error can't find the container with id 15b15f769043739b4044c4158598667f3aeee620d3bcb9b51accd4a058c43a66 Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.817274 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lnvrv" podStartSLOduration=122.817256604 podStartE2EDuration="2m2.817256604s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:56.816073242 +0000 UTC m=+145.159182666" watchObservedRunningTime="2025-11-25 14:26:56.817256604 +0000 UTC m=+145.160366028" Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.853793 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:56 crc kubenswrapper[4796]: E1125 14:26:56.858106 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-25 14:26:57.358068314 +0000 UTC m=+145.701177748 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.955717 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:56 crc kubenswrapper[4796]: E1125 14:26:56.955859 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:57.455819354 +0000 UTC m=+145.798928778 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:56 crc kubenswrapper[4796]: I1125 14:26:56.956273 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:56 crc kubenswrapper[4796]: E1125 14:26:56.956661 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:57.456647097 +0000 UTC m=+145.799756521 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.056858 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:57 crc kubenswrapper[4796]: E1125 14:26:57.057206 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:57.557185142 +0000 UTC m=+145.900294566 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.157754 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:57 crc kubenswrapper[4796]: E1125 14:26:57.158223 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:57.65820356 +0000 UTC m=+146.001312994 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.259136 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:57 crc kubenswrapper[4796]: E1125 14:26:57.259720 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:57.759547957 +0000 UTC m=+146.102657411 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.361264 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:57 crc kubenswrapper[4796]: E1125 14:26:57.361704 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:57.861683684 +0000 UTC m=+146.204793118 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.377568 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55" podStartSLOduration=122.377544537 podStartE2EDuration="2m2.377544537s" podCreationTimestamp="2025-11-25 14:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:57.376470459 +0000 UTC m=+145.719579923" watchObservedRunningTime="2025-11-25 14:26:57.377544537 +0000 UTC m=+145.720653991" Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.418146 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qj88x" podStartSLOduration=123.418128761 podStartE2EDuration="2m3.418128761s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:57.415783179 +0000 UTC m=+145.758892603" watchObservedRunningTime="2025-11-25 14:26:57.418128761 +0000 UTC m=+145.761238185" Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.462365 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:57 crc kubenswrapper[4796]: E1125 14:26:57.462537 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:57.962512457 +0000 UTC m=+146.305621881 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.462739 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:57 crc kubenswrapper[4796]: E1125 14:26:57.463009 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:57.96299977 +0000 UTC m=+146.306109264 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.463666 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-fhbvb" podStartSLOduration=123.463645677 podStartE2EDuration="2m3.463645677s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:57.461196721 +0000 UTC m=+145.804306165" watchObservedRunningTime="2025-11-25 14:26:57.463645677 +0000 UTC m=+145.806755111" Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.501851 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" podStartSLOduration=123.501828367 podStartE2EDuration="2m3.501828367s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:57.501462917 +0000 UTC m=+145.844572371" watchObservedRunningTime="2025-11-25 14:26:57.501828367 +0000 UTC m=+145.844937801" Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.563178 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:57 crc kubenswrapper[4796]: E1125 14:26:57.563346 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:58.063322909 +0000 UTC m=+146.406432333 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.563475 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:57 crc kubenswrapper[4796]: E1125 14:26:57.563936 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:58.063919475 +0000 UTC m=+146.407028909 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.664073 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:57 crc kubenswrapper[4796]: E1125 14:26:57.664290 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:58.164230684 +0000 UTC m=+146.507340118 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.664400 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:57 crc kubenswrapper[4796]: E1125 14:26:57.664727 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:58.164715737 +0000 UTC m=+146.507825161 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.724076 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tsl5t" event={"ID":"c239761f-ade6-47eb-8fa5-f5178577ccb1","Type":"ContainerStarted","Data":"25d26c6f8706cd0ffddc4c00aa069fdbacfb13ff2cc5dcf400d59d6e249a661a"} Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.728471 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jz265" event={"ID":"c9a628c8-c197-42af-a1e0-287e38308e8a","Type":"ContainerStarted","Data":"43e147cba57c3a94c6411c6f1f455969d4162e1422aee11c388eb6e669cc5503"} Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.729714 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wdh6v" event={"ID":"f667ef84-04a1-4c76-95d7-75648124470f","Type":"ContainerStarted","Data":"15b15f769043739b4044c4158598667f3aeee620d3bcb9b51accd4a058c43a66"} Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.731758 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5" event={"ID":"8b75fc2c-7703-4bee-9e6b-6ea32511fc42","Type":"ContainerStarted","Data":"c6cfc4151dc6d87e91b5d5433b102c96d183e24b15190452a7362e114a2ecaa3"} Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.734733 4796 generic.go:334] "Generic (PLEG): container finished" 
podID="453a1a57-5017-420d-b2e5-2fef1a7721f5" containerID="e8231cbbafe31c46c6bc554b9ebe6aa1474ee48a20ec476de5f6ccdd4808803d" exitCode=0 Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.734806 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vzn94" event={"ID":"453a1a57-5017-420d-b2e5-2fef1a7721f5","Type":"ContainerDied","Data":"e8231cbbafe31c46c6bc554b9ebe6aa1474ee48a20ec476de5f6ccdd4808803d"} Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.736101 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j22d5" event={"ID":"be17e585-a456-4306-9613-ac2498fc550c","Type":"ContainerStarted","Data":"20bdf1814a9a92affa4faa2f796bf2eacbb69b19a83383164b83cb886959f893"} Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.746265 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-w9vpf" event={"ID":"93145a04-d9cc-419c-aac9-a236aa357d00","Type":"ContainerStarted","Data":"2090656d7ae405cd136a229e57aa47337da30794b1645d1f98c1797bcd65949b"} Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.758236 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b72mt" event={"ID":"9ec4132b-350d-4ae8-9f11-38218dd0b07e","Type":"ContainerStarted","Data":"8b5982a14de84a3c02b2ec3b65b03ab871f1af03dc5f52a955972408c1e787c9"} Nov 25 14:26:57 crc kubenswrapper[4796]: E1125 14:26:57.766964 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:58.266934227 +0000 UTC m=+146.610043661 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.766807 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.767554 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.770513 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4g8pn" event={"ID":"979c3909-38ab-4fa1-9374-29d4ce969c8e","Type":"ContainerStarted","Data":"8a2e0ebe1cb84bfc75d39b05a0702bfa7481ce597cb249763ad61e5eeeca0880"} Nov 25 14:26:57 crc kubenswrapper[4796]: E1125 14:26:57.771257 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-25 14:26:58.271235372 +0000 UTC m=+146.614344916 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.788178 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401335-kz9sv" event={"ID":"fab48abd-b847-4828-99f2-e9d7d3312e94","Type":"ContainerStarted","Data":"c7e1f60ff6e8f6f667659e5dc9896c30689bb50f7e829fc825e978d0e3736b5d"} Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.803216 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vkwpv" event={"ID":"adc5632d-1700-4ff8-a1db-7e53ee263222","Type":"ContainerStarted","Data":"0e3455a0e16fcaa317cfed78678a38e647621ae60330ae0590accdb97830e070"} Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.827186 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-c8b7v" event={"ID":"786d9482-4e90-4a71-abf3-40bf3101fc86","Type":"ContainerStarted","Data":"860de122a3f9cb17d5897afdbff27c9249bcfe24896b08df93abb7286e95737b"} Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.844214 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jjf82" event={"ID":"3d926d83-e3cc-4bf1-ba33-629f2c058590","Type":"ContainerStarted","Data":"03db6124dbc1e464e00f4a89d73458276757ebf92e4ec708b301a2f0a16bd6cc"} Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 
14:26:57.866852 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k6xrl" event={"ID":"5991c579-d1dc-44d7-b62e-2465d9c2aa4b","Type":"ContainerStarted","Data":"725184d3e47fcf12957724215b1281bce7daab2ed3031e672be6123274127100"} Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.870314 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:57 crc kubenswrapper[4796]: E1125 14:26:57.870644 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:58.370627456 +0000 UTC m=+146.713736890 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.871710 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64jzs" event={"ID":"63aeb87d-a8b1-40a5-95b9-e224d1bd968f","Type":"ContainerStarted","Data":"b4e768bda2fb01e54d1b3e7c84545412b18985720334321ee8699961d772f7dc"} Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.880738 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bxtz9" event={"ID":"524a60f2-4fff-4571-9f11-99d5178fd2a3","Type":"ContainerStarted","Data":"07c07efc556304b14172b3aa452807cb76ad7149215afcaf0d789b383967fb65"} Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.884085 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jvr77" event={"ID":"a85003e7-763f-4480-af83-0a827574dc25","Type":"ContainerStarted","Data":"dc302967b10576ab38c9adf17789824f94a60f5fd043fec220ac6bec1d94bb26"} Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.885538 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w7tqs" event={"ID":"a9d6e924-f8c3-4f0a-92f3-942e822e5fc5","Type":"ContainerStarted","Data":"3644d14da0140b089d4be2bd35e7c85eb7c7b14e61fab50430eee27abf8df6f8"} Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.889983 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-c6rl5" 
event={"ID":"0162f2df-c29a-4c00-b445-67a9bae4c5ad","Type":"ContainerStarted","Data":"7954768d08aaa1987ca72a26a647f467fea4f47d9a2673ea894058a223573cd4"} Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.891218 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lvdx5" event={"ID":"67c0424c-b0ff-417d-bf4c-1cdcadd1ebac","Type":"ContainerStarted","Data":"d968e80a152c17eaf377e5489afb8d44f2947762e8563d0366ea56db74717c2b"} Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.896550 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9lndl" event={"ID":"506a2195-43f9-4a3a-ad03-ad55166c7e03","Type":"ContainerStarted","Data":"b70144b45e5e3f17b808d9ee9efe4d97515c19da35cb8424881a6d488c1629e4"} Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.903755 4796 generic.go:334] "Generic (PLEG): container finished" podID="7b22dd74-4a14-454e-8b0d-9fdb57ce6653" containerID="4daa7ed6526368e522b9315a060f3289bc40cf05a9aefd60243981783ec72281" exitCode=0 Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.904502 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" event={"ID":"7b22dd74-4a14-454e-8b0d-9fdb57ce6653","Type":"ContainerDied","Data":"4daa7ed6526368e522b9315a060f3289bc40cf05a9aefd60243981783ec72281"} Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.906333 4796 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-dr5s9 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.26:6443/healthz\": dial tcp 10.217.0.26:6443: connect: connection refused" start-of-body= Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.906363 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" podUID="76da93ba-dcf4-4f52-982f-ce98a9718cc8" 
containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.26:6443/healthz\": dial tcp 10.217.0.26:6443: connect: connection refused" Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.906638 4796 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-6tp55 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.906752 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55" podUID="0d8de494-9c7a-47e6-afa1-47007836acd8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.941531 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-x57qm" podStartSLOduration=123.94151507 podStartE2EDuration="2m3.94151507s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:57.925439381 +0000 UTC m=+146.268548805" watchObservedRunningTime="2025-11-25 14:26:57.94151507 +0000 UTC m=+146.284624494" Nov 25 14:26:57 crc kubenswrapper[4796]: E1125 14:26:57.973332 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:58.473315409 +0000 UTC m=+146.816424953 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:57 crc kubenswrapper[4796]: I1125 14:26:57.973398 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.075492 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:58 crc kubenswrapper[4796]: E1125 14:26:58.075678 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:58.575655082 +0000 UTC m=+146.918764526 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.075937 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:58 crc kubenswrapper[4796]: E1125 14:26:58.076412 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:58.576398552 +0000 UTC m=+146.919507986 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.177315 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:58 crc kubenswrapper[4796]: E1125 14:26:58.177527 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:58.677489422 +0000 UTC m=+147.020598856 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.177657 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:58 crc kubenswrapper[4796]: E1125 14:26:58.178023 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:58.678006325 +0000 UTC m=+147.021115759 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.279317 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:58 crc kubenswrapper[4796]: E1125 14:26:58.279746 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:58.779728602 +0000 UTC m=+147.122838046 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.381766 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:58 crc kubenswrapper[4796]: E1125 14:26:58.382176 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:58.882157219 +0000 UTC m=+147.225266643 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.484296 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:58 crc kubenswrapper[4796]: E1125 14:26:58.484445 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:58.984417009 +0000 UTC m=+147.327526423 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.484756 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:58 crc kubenswrapper[4796]: E1125 14:26:58.485102 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:58.985089657 +0000 UTC m=+147.328199081 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.585725 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:58 crc kubenswrapper[4796]: E1125 14:26:58.585861 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:59.085840888 +0000 UTC m=+147.428950322 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.586084 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:58 crc kubenswrapper[4796]: E1125 14:26:58.586457 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:59.086447755 +0000 UTC m=+147.429557179 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.687480 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:58 crc kubenswrapper[4796]: E1125 14:26:58.687740 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:59.187682888 +0000 UTC m=+147.530792322 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.687921 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:58 crc kubenswrapper[4796]: E1125 14:26:58.688239 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:59.188223182 +0000 UTC m=+147.531332626 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.788836 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:58 crc kubenswrapper[4796]: E1125 14:26:58.789182 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:59.289163598 +0000 UTC m=+147.632273022 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.890703 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:58 crc kubenswrapper[4796]: E1125 14:26:58.891045 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:59.391033459 +0000 UTC m=+147.734142883 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.908541 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64jzs" event={"ID":"63aeb87d-a8b1-40a5-95b9-e224d1bd968f","Type":"ContainerStarted","Data":"1054bc0a10532c3ed0b322be3ff4160541f7edc816a660546d3179310a69e834"} Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.909487 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qpbx9" event={"ID":"8d0d05a9-8584-4102-8f50-6c0e36923a3e","Type":"ContainerStarted","Data":"8877ab35722a17d0a9c74df9620d32869527dd9a344a91e60427d26fc4914d84"} Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.910223 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dt7c" event={"ID":"b4b66ff6-b653-4a4a-9d92-b16b94d4d4e9","Type":"ContainerStarted","Data":"473b98b0837a0a0c292119f63e79698de4f627f59916de99c4a8c793e2610235"} Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.911355 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wdh6v" event={"ID":"f667ef84-04a1-4c76-95d7-75648124470f","Type":"ContainerStarted","Data":"c09c23c806029b467ca84f8c5bf68e69d5297e3e8169a819cede509351197537"} Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.912445 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wdh6v" Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.913895 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4g8pn" event={"ID":"979c3909-38ab-4fa1-9374-29d4ce969c8e","Type":"ContainerStarted","Data":"3eb69d12eee989acc30f61b4d6f896566373d8b9f73fefb3133f1c875ef4ffb5"} Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.914062 4796 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-wdh6v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.914094 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wdh6v" podUID="f667ef84-04a1-4c76-95d7-75648124470f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.915563 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vzn94" event={"ID":"453a1a57-5017-420d-b2e5-2fef1a7721f5","Type":"ContainerStarted","Data":"198ae9843c10595aa4743f7e8231106c77de57296e90ccd271bc0ab28bff655a"} Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.916826 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rtsgw" event={"ID":"4f72561b-ab6e-4eb5-b855-bbafd724ce5f","Type":"ContainerStarted","Data":"115b5f90133013ae67e8374ed8ff748c19956e026fe24c83ac0729fb1fa7a284"} Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.918180 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-744455d44c-bzlvn" event={"ID":"91c0402b-e438-4e77-8a6d-2765d09030e0","Type":"ContainerStarted","Data":"eceb22b3fd5350117bcb9df908c9519759b71910b1cd1dc932bad4799740d8a4"} Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.919465 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-mtw8r" event={"ID":"371efde1-ba39-4eed-93e4-743cb2e7d996","Type":"ContainerStarted","Data":"1f4fbf9db8d5f6bb1c4d52bb8e7f5190c52a7259d23f00645c5e8fd34df69a66"} Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.921291 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b72mt" event={"ID":"9ec4132b-350d-4ae8-9f11-38218dd0b07e","Type":"ContainerStarted","Data":"4d36b19e71181541eea92c9e02f420e879ed4de4af63e8482132416314bc1425"} Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.922785 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6sp9g" event={"ID":"e91d3d88-7e80-4c0d-8c97-405ba9487fe7","Type":"ContainerStarted","Data":"95a0152c887dcfb27d109b17acfedc507b139410627356372b2659c452103f75"} Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.924616 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-spphm" event={"ID":"6b36d62b-3186-4e2b-961e-0e3553f75036","Type":"ContainerStarted","Data":"857eaf4510a3213a035d9c36e7e2cc89e736f2918c94ee6fb3259145466e101c"} Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.925942 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-gbdnn" event={"ID":"20ee15cc-da7a-4651-8ec6-a31683503069","Type":"ContainerStarted","Data":"c90def56cd68c179f4146b5de011248b7e1fb87908acf604305c8d2748a925f2"} Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.927214 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j22d5" event={"ID":"be17e585-a456-4306-9613-ac2498fc550c","Type":"ContainerStarted","Data":"3ae5fb96a8a55c1f4e3eaf9f195acdb0d040eb5aeb8d83be340fac107781e5da"} Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.928911 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zgvlv" event={"ID":"045f9b17-672f-4c8f-b397-95edf297e34f","Type":"ContainerStarted","Data":"e215df82dbcc13a6b2954313a55e6dba3269f3e1f85a89299b7b5ec646a84f3d"} Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.930610 4796 generic.go:334] "Generic (PLEG): container finished" podID="5991c579-d1dc-44d7-b62e-2465d9c2aa4b" containerID="725184d3e47fcf12957724215b1281bce7daab2ed3031e672be6123274127100" exitCode=0 Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.930825 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k6xrl" event={"ID":"5991c579-d1dc-44d7-b62e-2465d9c2aa4b","Type":"ContainerDied","Data":"725184d3e47fcf12957724215b1281bce7daab2ed3031e672be6123274127100"} Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.933691 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zph9v" event={"ID":"bfa31788-625e-40c0-a671-90ff80e2f400","Type":"ContainerStarted","Data":"dec099ac14fa82931ee66565578f3fdf4acef940280c4b65ee049f4f92902ba5"} Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.934737 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7rkrg" event={"ID":"3c639e36-21d6-4cda-8fb4-08c52ea849c7","Type":"ContainerStarted","Data":"8d514788d19941388493f44a5fdb053f14e8cad9fdac995d616d081d72e1e649"} Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.937034 4796 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jvr77" event={"ID":"a85003e7-763f-4480-af83-0a827574dc25","Type":"ContainerStarted","Data":"aea8b053b997c2abb1c155995d0c63e45400b18405729585c17478e8da687396"} Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.940100 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-w9vpf" Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.940124 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9lndl" Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.940135 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5" Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.940144 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-tsl5t" Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.940204 4796 patch_prober.go:28] interesting pod/console-operator-58897d9998-w9vpf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.940240 4796 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9lndl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.940283 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9lndl" podUID="506a2195-43f9-4a3a-ad03-ad55166c7e03" containerName="marketplace-operator" 
probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.940248 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-w9vpf" podUID="93145a04-d9cc-419c-aac9-a236aa357d00" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.940450 4796 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-dr5s9 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.26:6443/healthz\": dial tcp 10.217.0.26:6443: connect: connection refused" start-of-body= Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.940485 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" podUID="76da93ba-dcf4-4f52-982f-ce98a9718cc8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.26:6443/healthz\": dial tcp 10.217.0.26:6443: connect: connection refused" Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.940513 4796 patch_prober.go:28] interesting pod/downloads-7954f5f757-tsl5t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.940518 4796 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-vbgn5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 
14:26:58.940589 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tsl5t" podUID="c239761f-ade6-47eb-8fa5-f5178577ccb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.940662 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5" podUID="8b75fc2c-7703-4bee-9e6b-6ea32511fc42" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.941952 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-qmn8p" podStartSLOduration=124.941939959 podStartE2EDuration="2m4.941939959s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:57.940457381 +0000 UTC m=+146.283566795" watchObservedRunningTime="2025-11-25 14:26:58.941939959 +0000 UTC m=+147.285049383" Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.943551 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wdh6v" podStartSLOduration=123.943539401 podStartE2EDuration="2m3.943539401s" podCreationTimestamp="2025-11-25 14:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:58.943280664 +0000 UTC m=+147.286390088" watchObservedRunningTime="2025-11-25 14:26:58.943539401 +0000 UTC m=+147.286648825" Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.971472 4796 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-c6rl5" podStartSLOduration=124.971456987 podStartE2EDuration="2m4.971456987s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:58.970144561 +0000 UTC m=+147.313254005" watchObservedRunningTime="2025-11-25 14:26:58.971456987 +0000 UTC m=+147.314566411" Nov 25 14:26:58 crc kubenswrapper[4796]: I1125 14:26:58.992645 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:58 crc kubenswrapper[4796]: E1125 14:26:58.995895 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:59.495872049 +0000 UTC m=+147.838981483 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.006860 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-w9vpf" podStartSLOduration=125.006842141 podStartE2EDuration="2m5.006842141s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:58.997260526 +0000 UTC m=+147.340369950" watchObservedRunningTime="2025-11-25 14:26:59.006842141 +0000 UTC m=+147.349951565" Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.053146 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29401335-kz9sv" podStartSLOduration=125.053126358 podStartE2EDuration="2m5.053126358s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:59.052439799 +0000 UTC m=+147.395549243" watchObservedRunningTime="2025-11-25 14:26:59.053126358 +0000 UTC m=+147.396235782" Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.054505 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jz265" podStartSLOduration=125.054497394 podStartE2EDuration="2m5.054497394s" podCreationTimestamp="2025-11-25 
14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:59.026236739 +0000 UTC m=+147.369346163" watchObservedRunningTime="2025-11-25 14:26:59.054497394 +0000 UTC m=+147.397606818" Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.083314 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bxtz9" podStartSLOduration=125.083300354 podStartE2EDuration="2m5.083300354s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:59.0827927 +0000 UTC m=+147.425902134" watchObservedRunningTime="2025-11-25 14:26:59.083300354 +0000 UTC m=+147.426409778" Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.097303 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:59 crc kubenswrapper[4796]: E1125 14:26:59.097777 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:59.59776017 +0000 UTC m=+147.940869594 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.162197 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-tsl5t" podStartSLOduration=125.162181001 podStartE2EDuration="2m5.162181001s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:59.160235939 +0000 UTC m=+147.503345373" watchObservedRunningTime="2025-11-25 14:26:59.162181001 +0000 UTC m=+147.505290425" Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.164235 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-mtw8r" podStartSLOduration=7.164226015 podStartE2EDuration="7.164226015s" podCreationTimestamp="2025-11-25 14:26:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:59.136762282 +0000 UTC m=+147.479871706" watchObservedRunningTime="2025-11-25 14:26:59.164226015 +0000 UTC m=+147.507335439" Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.172727 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-c6rl5" Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.175687 4796 patch_prober.go:28] interesting pod/router-default-5444994796-c6rl5 container/router 
namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.175744 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c6rl5" podUID="0162f2df-c29a-4c00-b445-67a9bae4c5ad" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.187198 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-9lndl" podStartSLOduration=124.187184488 podStartE2EDuration="2m4.187184488s" podCreationTimestamp="2025-11-25 14:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:59.186964222 +0000 UTC m=+147.530073656" watchObservedRunningTime="2025-11-25 14:26:59.187184488 +0000 UTC m=+147.530293912" Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.198900 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:59 crc kubenswrapper[4796]: E1125 14:26:59.199034 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:59.699006424 +0000 UTC m=+148.042115858 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.199166 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:59 crc kubenswrapper[4796]: E1125 14:26:59.199552 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:59.699540379 +0000 UTC m=+148.042649803 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.233369 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5" podStartSLOduration=125.233346891 podStartE2EDuration="2m5.233346891s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:59.225775789 +0000 UTC m=+147.568885203" watchObservedRunningTime="2025-11-25 14:26:59.233346891 +0000 UTC m=+147.576456305" Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.301139 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:59 crc kubenswrapper[4796]: E1125 14:26:59.301435 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:26:59.801421169 +0000 UTC m=+148.144530593 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.402422 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:59 crc kubenswrapper[4796]: E1125 14:26:59.402800 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:26:59.902785086 +0000 UTC m=+148.245894520 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.503178 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:59 crc kubenswrapper[4796]: E1125 14:26:59.503378 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:00.003354332 +0000 UTC m=+148.346463756 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.503504 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:59 crc kubenswrapper[4796]: E1125 14:26:59.503866 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:00.003859946 +0000 UTC m=+148.346969370 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.605141 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:59 crc kubenswrapper[4796]: E1125 14:26:59.605272 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:00.105245154 +0000 UTC m=+148.448354578 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.605326 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:59 crc kubenswrapper[4796]: E1125 14:26:59.605697 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:00.105686215 +0000 UTC m=+148.448795639 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.706430 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:59 crc kubenswrapper[4796]: E1125 14:26:59.706674 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:00.206636501 +0000 UTC m=+148.549745945 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.707724 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:59 crc kubenswrapper[4796]: E1125 14:26:59.708499 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:00.208228874 +0000 UTC m=+148.551338378 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.809779 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:59 crc kubenswrapper[4796]: E1125 14:26:59.809947 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:00.30991929 +0000 UTC m=+148.653028714 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.810377 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:26:59 crc kubenswrapper[4796]: E1125 14:26:59.810741 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:00.310731161 +0000 UTC m=+148.653840585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.911503 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:26:59 crc kubenswrapper[4796]: E1125 14:26:59.911881 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:00.411861042 +0000 UTC m=+148.754970476 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.942336 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6sp9g" event={"ID":"e91d3d88-7e80-4c0d-8c97-405ba9487fe7","Type":"ContainerStarted","Data":"4e988e8364246ccb06aacad58c329ee7924095289187761a652f769721ddc9ac"} Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.944262 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4g8pn" event={"ID":"979c3909-38ab-4fa1-9374-29d4ce969c8e","Type":"ContainerStarted","Data":"e9887d62d168933d311c0e84b6dde43923e378e5124c159f3af56be2e10197a0"} Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.945810 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vkwpv" event={"ID":"adc5632d-1700-4ff8-a1db-7e53ee263222","Type":"ContainerStarted","Data":"a7549cbf328cc34aee2492965aa5debe9b2014595bcab27d727b5d1e0a1492a9"} Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.952828 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lvdx5" event={"ID":"67c0424c-b0ff-417d-bf4c-1cdcadd1ebac","Type":"ContainerStarted","Data":"632af0ccf346e8b3916443e3258b781c402fdb639cf1d16458e69bbd40eac225"} Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.955734 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-857f4d67dd-c8b7v" event={"ID":"786d9482-4e90-4a71-abf3-40bf3101fc86","Type":"ContainerStarted","Data":"6304d2ac6fccc4b940ba45889cb70d86da7a1b33d851c2ea4e21b9f0f44660f6"} Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.969377 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vkwpv" podStartSLOduration=124.969363919 podStartE2EDuration="2m4.969363919s" podCreationTimestamp="2025-11-25 14:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:59.968599718 +0000 UTC m=+148.311709142" watchObservedRunningTime="2025-11-25 14:26:59.969363919 +0000 UTC m=+148.312473343" Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.972044 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jjf82" podStartSLOduration=125.97203447 podStartE2EDuration="2m5.97203447s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:26:59.257541017 +0000 UTC m=+147.600650441" watchObservedRunningTime="2025-11-25 14:26:59.97203447 +0000 UTC m=+148.315143884" Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.977208 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k6xrl" event={"ID":"5991c579-d1dc-44d7-b62e-2465d9c2aa4b","Type":"ContainerStarted","Data":"db64b0b3ae84058adbed076db2e5ac58cce75c704c1b144743e7786a22ef04d2"} Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.977224 4796 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-b72mt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe 
status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.977432 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b72mt" Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.977448 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b72mt" podUID="9ec4132b-350d-4ae8-9f11-38218dd0b07e" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.977723 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jvr77" Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.977735 4796 patch_prober.go:28] interesting pod/downloads-7954f5f757-tsl5t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.977807 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tsl5t" podUID="c239761f-ade6-47eb-8fa5-f5178577ccb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.977866 4796 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-wdh6v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" 
start-of-body= Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.977894 4796 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9lndl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.977866 4796 patch_prober.go:28] interesting pod/console-operator-58897d9998-w9vpf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.977924 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9lndl" podUID="506a2195-43f9-4a3a-ad03-ad55166c7e03" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.977947 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-w9vpf" podUID="93145a04-d9cc-419c-aac9-a236aa357d00" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.977911 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wdh6v" podUID="f667ef84-04a1-4c76-95d7-75648124470f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.978166 4796 
patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-jvr77 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.978195 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jvr77" podUID="a85003e7-763f-4480-af83-0a827574dc25" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.979281 4796 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-vbgn5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Nov 25 14:26:59 crc kubenswrapper[4796]: I1125 14:26:59.979308 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5" podUID="8b75fc2c-7703-4bee-9e6b-6ea32511fc42" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.009848 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-lvdx5" podStartSLOduration=125.009831989 podStartE2EDuration="2m5.009831989s" podCreationTimestamp="2025-11-25 14:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:27:00.008598867 +0000 UTC m=+148.351708291" watchObservedRunningTime="2025-11-25 
14:27:00.009831989 +0000 UTC m=+148.352941403" Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.013797 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:00 crc kubenswrapper[4796]: E1125 14:27:00.017279 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:00.517264197 +0000 UTC m=+148.860373611 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.034494 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-c8b7v" podStartSLOduration=125.034476357 podStartE2EDuration="2m5.034476357s" podCreationTimestamp="2025-11-25 14:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:27:00.033314177 +0000 UTC m=+148.376423601" watchObservedRunningTime="2025-11-25 14:27:00.034476357 +0000 UTC m=+148.377585781" Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.051996 4796 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-7rkrg" podStartSLOduration=125.051979125 podStartE2EDuration="2m5.051979125s" podCreationTimestamp="2025-11-25 14:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:27:00.050025913 +0000 UTC m=+148.393135337" watchObservedRunningTime="2025-11-25 14:27:00.051979125 +0000 UTC m=+148.395088549" Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.065196 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zph9v" podStartSLOduration=126.065183227 podStartE2EDuration="2m6.065183227s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:27:00.063878313 +0000 UTC m=+148.406987737" watchObservedRunningTime="2025-11-25 14:27:00.065183227 +0000 UTC m=+148.408292651" Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.105034 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4dt7c" podStartSLOduration=125.105018681 podStartE2EDuration="2m5.105018681s" podCreationTimestamp="2025-11-25 14:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:27:00.083356243 +0000 UTC m=+148.426465667" watchObservedRunningTime="2025-11-25 14:27:00.105018681 +0000 UTC m=+148.448128105" Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.105636 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64jzs" podStartSLOduration=126.105622208 
podStartE2EDuration="2m6.105622208s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:27:00.103671906 +0000 UTC m=+148.446781330" watchObservedRunningTime="2025-11-25 14:27:00.105622208 +0000 UTC m=+148.448731632" Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.114475 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:00 crc kubenswrapper[4796]: E1125 14:27:00.114797 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:00.614782652 +0000 UTC m=+148.957892076 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.137391 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jvr77" podStartSLOduration=125.137377116 podStartE2EDuration="2m5.137377116s" podCreationTimestamp="2025-11-25 14:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:27:00.125845228 +0000 UTC m=+148.468954652" watchObservedRunningTime="2025-11-25 14:27:00.137377116 +0000 UTC m=+148.480486540" Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.139669 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-gbdnn" podStartSLOduration=8.139659357 podStartE2EDuration="8.139659357s" podCreationTimestamp="2025-11-25 14:26:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:27:00.136917663 +0000 UTC m=+148.480027087" watchObservedRunningTime="2025-11-25 14:27:00.139659357 +0000 UTC m=+148.482768781" Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.151961 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zgvlv" podStartSLOduration=126.151946834 podStartE2EDuration="2m6.151946834s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:27:00.149931741 +0000 UTC m=+148.493041165" watchObservedRunningTime="2025-11-25 14:27:00.151946834 +0000 UTC m=+148.495056258" Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.166763 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j22d5" podStartSLOduration=126.16674861 podStartE2EDuration="2m6.16674861s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:27:00.165890387 +0000 UTC m=+148.508999811" watchObservedRunningTime="2025-11-25 14:27:00.16674861 +0000 UTC m=+148.509858034" Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.173702 4796 patch_prober.go:28] interesting pod/router-default-5444994796-c6rl5 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.173752 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c6rl5" podUID="0162f2df-c29a-4c00-b445-67a9bae4c5ad" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.194426 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b72mt" podStartSLOduration=125.194410368 podStartE2EDuration="2m5.194410368s" podCreationTimestamp="2025-11-25 14:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:27:00.192079846 +0000 UTC m=+148.535189270" watchObservedRunningTime="2025-11-25 14:27:00.194410368 +0000 UTC m=+148.537519792" Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.216287 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:00 crc kubenswrapper[4796]: E1125 14:27:00.216738 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:00.716719194 +0000 UTC m=+149.059828668 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.317534 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:00 crc kubenswrapper[4796]: E1125 14:27:00.317664 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:00.81764669 +0000 UTC m=+149.160756114 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.317768 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:00 crc kubenswrapper[4796]: E1125 14:27:00.318038 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:00.81803006 +0000 UTC m=+149.161139484 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.419218 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:00 crc kubenswrapper[4796]: E1125 14:27:00.419544 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:00.919529861 +0000 UTC m=+149.262639285 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.521111 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:00 crc kubenswrapper[4796]: E1125 14:27:00.521564 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:01.021548515 +0000 UTC m=+149.364657939 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.622649 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:00 crc kubenswrapper[4796]: E1125 14:27:00.622868 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:01.122842201 +0000 UTC m=+149.465951625 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.623033 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:00 crc kubenswrapper[4796]: E1125 14:27:00.623351 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:01.123336284 +0000 UTC m=+149.466445718 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.723822 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:00 crc kubenswrapper[4796]: E1125 14:27:00.723987 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:01.223962442 +0000 UTC m=+149.567071866 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.724302 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:00 crc kubenswrapper[4796]: E1125 14:27:00.724691 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:01.224678861 +0000 UTC m=+149.567788335 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.825387 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:00 crc kubenswrapper[4796]: E1125 14:27:00.825613 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:01.325581296 +0000 UTC m=+149.668690720 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.825958 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:00 crc kubenswrapper[4796]: E1125 14:27:00.826264 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:01.326252023 +0000 UTC m=+149.669361447 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.926987 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:00 crc kubenswrapper[4796]: E1125 14:27:00.927433 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:01.427414035 +0000 UTC m=+149.770523459 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.980047 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qpbx9" event={"ID":"8d0d05a9-8584-4102-8f50-6c0e36923a3e","Type":"ContainerStarted","Data":"bb1f5e8cc3f05c74807e3fd45c35bbaaa0a92a9125809d53fe55e51ec7b0095b"} Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.982125 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vzn94" event={"ID":"453a1a57-5017-420d-b2e5-2fef1a7721f5","Type":"ContainerStarted","Data":"fb0dd7fa516f15391e3fa2ecc3cc44c998b883f4cf86600889f0a557797d336f"} Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.983371 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-spphm" event={"ID":"6b36d62b-3186-4e2b-961e-0e3553f75036","Type":"ContainerStarted","Data":"967640f73bdb07fc10c4c65f45d9ce3669a4e22c3427a512ea92a1a8dcd5000b"} Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.983780 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-spphm" Nov 25 14:27:00 crc kubenswrapper[4796]: I1125 14:27:00.985002 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" event={"ID":"7b22dd74-4a14-454e-8b0d-9fdb57ce6653","Type":"ContainerStarted","Data":"aea34a67751979370c75b26393994203d2d2fff299c9e52a8c099bd236683288"} Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.005355 4796 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qpbx9" podStartSLOduration=127.005336167 podStartE2EDuration="2m7.005336167s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:27:01.003116247 +0000 UTC m=+149.346225671" watchObservedRunningTime="2025-11-25 14:27:01.005336167 +0000 UTC m=+149.348445591" Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.006818 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w7tqs" event={"ID":"a9d6e924-f8c3-4f0a-92f3-942e822e5fc5","Type":"ContainerStarted","Data":"5f4cc6fcb2a19dde18bbff532c49ff2f42974e44997bbd71d3175fee5a448620"} Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.016774 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bzlvn" event={"ID":"91c0402b-e438-4e77-8a6d-2765d09030e0","Type":"ContainerStarted","Data":"5acee540e6ee2061d1e1c0e46bc917216eb0b713e1b35a148360dcee6ac54367"} Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.025605 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rtsgw" event={"ID":"4f72561b-ab6e-4eb5-b855-bbafd724ce5f","Type":"ContainerStarted","Data":"52f6db5d7041528bb244ca5f4f1d547a3ad0817b10ac149e5484cfc76dd5218e"} Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.026043 4796 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-b72mt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.026084 4796 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b72mt" podUID="9ec4132b-350d-4ae8-9f11-38218dd0b07e" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.026126 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-spphm" podStartSLOduration=9.026112141 podStartE2EDuration="9.026112141s" podCreationTimestamp="2025-11-25 14:26:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:27:01.022931046 +0000 UTC m=+149.366040470" watchObservedRunningTime="2025-11-25 14:27:01.026112141 +0000 UTC m=+149.369221565" Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.026535 4796 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-wdh6v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.026582 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wdh6v" podUID="f667ef84-04a1-4c76-95d7-75648124470f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.026676 4796 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-jvr77 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Nov 25 14:27:01 crc 
kubenswrapper[4796]: I1125 14:27:01.026698 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jvr77" podUID="a85003e7-763f-4480-af83-0a827574dc25" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.031450 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:01 crc kubenswrapper[4796]: E1125 14:27:01.033099 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:01.533087598 +0000 UTC m=+149.876197022 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.050302 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-vzn94" podStartSLOduration=127.050283466 podStartE2EDuration="2m7.050283466s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:27:01.050196545 +0000 UTC m=+149.393305989" watchObservedRunningTime="2025-11-25 14:27:01.050283466 +0000 UTC m=+149.393392890" Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.069720 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" podStartSLOduration=126.069702536 podStartE2EDuration="2m6.069702536s" podCreationTimestamp="2025-11-25 14:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:27:01.068969246 +0000 UTC m=+149.412078670" watchObservedRunningTime="2025-11-25 14:27:01.069702536 +0000 UTC m=+149.412811960" Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.083195 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rtsgw" podStartSLOduration=127.083180026 podStartE2EDuration="2m7.083180026s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:27:01.082278641 +0000 UTC m=+149.425388065" watchObservedRunningTime="2025-11-25 14:27:01.083180026 +0000 UTC m=+149.426289450" Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.100485 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-bzlvn" podStartSLOduration=127.100467558 podStartE2EDuration="2m7.100467558s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:27:01.100022166 +0000 UTC m=+149.443131590" watchObservedRunningTime="2025-11-25 14:27:01.100467558 +0000 UTC m=+149.443576982" Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.115795 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6sp9g" podStartSLOduration=126.115781006 podStartE2EDuration="2m6.115781006s" podCreationTimestamp="2025-11-25 14:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:27:01.115034886 +0000 UTC m=+149.458144310" watchObservedRunningTime="2025-11-25 14:27:01.115781006 +0000 UTC m=+149.458890430" Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.132962 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:01 crc kubenswrapper[4796]: E1125 14:27:01.133346 4796 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:01.633329265 +0000 UTC m=+149.976438689 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.159959 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k6xrl" podStartSLOduration=127.159945675 podStartE2EDuration="2m7.159945675s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:27:01.156676568 +0000 UTC m=+149.499786012" watchObservedRunningTime="2025-11-25 14:27:01.159945675 +0000 UTC m=+149.503055099" Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.177620 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4g8pn" podStartSLOduration=126.177604357 podStartE2EDuration="2m6.177604357s" podCreationTimestamp="2025-11-25 14:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:27:01.175359457 +0000 UTC m=+149.518468881" watchObservedRunningTime="2025-11-25 14:27:01.177604357 +0000 UTC m=+149.520713781" Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 
14:27:01.180992 4796 patch_prober.go:28] interesting pod/router-default-5444994796-c6rl5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:27:01 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Nov 25 14:27:01 crc kubenswrapper[4796]: [+]process-running ok Nov 25 14:27:01 crc kubenswrapper[4796]: healthz check failed Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.181044 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c6rl5" podUID="0162f2df-c29a-4c00-b445-67a9bae4c5ad" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.236627 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:01 crc kubenswrapper[4796]: E1125 14:27:01.237008 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:01.736990914 +0000 UTC m=+150.080100338 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.337957 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:01 crc kubenswrapper[4796]: E1125 14:27:01.338142 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:01.838113294 +0000 UTC m=+150.181222728 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.338204 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.338287 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:27:01 crc kubenswrapper[4796]: E1125 14:27:01.338626 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:01.838614687 +0000 UTC m=+150.181724111 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.342255 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.439898 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.440071 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:27:01 crc kubenswrapper[4796]: E1125 14:27:01.440149 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2025-11-25 14:27:01.940103098 +0000 UTC m=+150.283212522 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.440215 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.440984 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.441081 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:01 crc kubenswrapper[4796]: E1125 14:27:01.441449 4796 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:01.941432864 +0000 UTC m=+150.284542288 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.446160 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.446261 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.450641 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.460940 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.461141 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.542611 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:01 crc kubenswrapper[4796]: E1125 14:27:01.542768 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:02.042729869 +0000 UTC m=+150.385839293 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.543101 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:01 crc kubenswrapper[4796]: E1125 14:27:01.543469 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:02.043453428 +0000 UTC m=+150.386562852 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.644207 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:01 crc kubenswrapper[4796]: E1125 14:27:01.644716 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:02.144696312 +0000 UTC m=+150.487805736 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.736289 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.747229 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:01 crc kubenswrapper[4796]: E1125 14:27:01.747614 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:02.24759893 +0000 UTC m=+150.590708354 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.850814 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:01 crc kubenswrapper[4796]: E1125 14:27:01.851181 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:02.351156786 +0000 UTC m=+150.694266210 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:01 crc kubenswrapper[4796]: W1125 14:27:01.860931 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-34d80f6be12dee45a6c95ccda44a02cc16feb2ffed98e998d975317d46201623 WatchSource:0}: Error finding container 34d80f6be12dee45a6c95ccda44a02cc16feb2ffed98e998d975317d46201623: Status 404 returned error can't find the container with id 34d80f6be12dee45a6c95ccda44a02cc16feb2ffed98e998d975317d46201623 Nov 25 14:27:01 crc kubenswrapper[4796]: I1125 14:27:01.954160 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:01 crc kubenswrapper[4796]: E1125 14:27:01.954469 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:02.454458026 +0000 UTC m=+150.797567450 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:01 crc kubenswrapper[4796]: W1125 14:27:01.962607 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-92431ab91188c673b0c71f62e822db8bb3dce7ff45a27a0afd069f988e59601e WatchSource:0}: Error finding container 92431ab91188c673b0c71f62e822db8bb3dce7ff45a27a0afd069f988e59601e: Status 404 returned error can't find the container with id 92431ab91188c673b0c71f62e822db8bb3dce7ff45a27a0afd069f988e59601e Nov 25 14:27:02 crc kubenswrapper[4796]: I1125 14:27:02.041173 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"92431ab91188c673b0c71f62e822db8bb3dce7ff45a27a0afd069f988e59601e"} Nov 25 14:27:02 crc kubenswrapper[4796]: I1125 14:27:02.050622 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"34d80f6be12dee45a6c95ccda44a02cc16feb2ffed98e998d975317d46201623"} Nov 25 14:27:02 crc kubenswrapper[4796]: I1125 14:27:02.055207 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:02 crc kubenswrapper[4796]: E1125 14:27:02.055684 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:02.555661738 +0000 UTC m=+150.898771162 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:02 crc kubenswrapper[4796]: I1125 14:27:02.160107 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:02 crc kubenswrapper[4796]: E1125 14:27:02.161132 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:02.661119445 +0000 UTC m=+151.004228859 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:02 crc kubenswrapper[4796]: I1125 14:27:02.179749 4796 patch_prober.go:28] interesting pod/router-default-5444994796-c6rl5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:27:02 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Nov 25 14:27:02 crc kubenswrapper[4796]: [+]process-running ok Nov 25 14:27:02 crc kubenswrapper[4796]: healthz check failed Nov 25 14:27:02 crc kubenswrapper[4796]: I1125 14:27:02.179803 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c6rl5" podUID="0162f2df-c29a-4c00-b445-67a9bae4c5ad" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:27:02 crc kubenswrapper[4796]: I1125 14:27:02.261282 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:02 crc kubenswrapper[4796]: E1125 14:27:02.261407 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 14:27:02.761383482 +0000 UTC m=+151.104492906 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:02 crc kubenswrapper[4796]: I1125 14:27:02.261866 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:02 crc kubenswrapper[4796]: E1125 14:27:02.262137 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:02.762128633 +0000 UTC m=+151.105238057 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:02 crc kubenswrapper[4796]: I1125 14:27:02.362628 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:02 crc kubenswrapper[4796]: E1125 14:27:02.362961 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:02.862946455 +0000 UTC m=+151.206055879 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:02 crc kubenswrapper[4796]: W1125 14:27:02.384308 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-26307d222ec8cf4e125d9030758080f511809d23c0725ec7077528e36b69c558 WatchSource:0}: Error finding container 26307d222ec8cf4e125d9030758080f511809d23c0725ec7077528e36b69c558: Status 404 returned error can't find the container with id 26307d222ec8cf4e125d9030758080f511809d23c0725ec7077528e36b69c558 Nov 25 14:27:02 crc kubenswrapper[4796]: I1125 14:27:02.463891 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:02 crc kubenswrapper[4796]: E1125 14:27:02.464251 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:02.964225439 +0000 UTC m=+151.307334863 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:02 crc kubenswrapper[4796]: I1125 14:27:02.564599 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:02 crc kubenswrapper[4796]: E1125 14:27:02.564746 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:03.064722664 +0000 UTC m=+151.407832088 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:02 crc kubenswrapper[4796]: I1125 14:27:02.564845 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:02 crc kubenswrapper[4796]: E1125 14:27:02.565135 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:03.065128034 +0000 UTC m=+151.408237458 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:02 crc kubenswrapper[4796]: I1125 14:27:02.666005 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:02 crc kubenswrapper[4796]: E1125 14:27:02.666176 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:03.166149782 +0000 UTC m=+151.509259206 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:02 crc kubenswrapper[4796]: I1125 14:27:02.666260 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:02 crc kubenswrapper[4796]: E1125 14:27:02.666509 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:03.166498842 +0000 UTC m=+151.509608266 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:02 crc kubenswrapper[4796]: I1125 14:27:02.767707 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:02 crc kubenswrapper[4796]: E1125 14:27:02.767888 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:03.267863479 +0000 UTC m=+151.610972893 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:02 crc kubenswrapper[4796]: I1125 14:27:02.767975 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:02 crc kubenswrapper[4796]: E1125 14:27:02.768275 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:03.26826749 +0000 UTC m=+151.611376914 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:02 crc kubenswrapper[4796]: I1125 14:27:02.868757 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:02 crc kubenswrapper[4796]: E1125 14:27:02.868927 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:03.368902638 +0000 UTC m=+151.712012062 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:02 crc kubenswrapper[4796]: I1125 14:27:02.869103 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:02 crc kubenswrapper[4796]: E1125 14:27:02.869454 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:03.369442752 +0000 UTC m=+151.712552166 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:02 crc kubenswrapper[4796]: I1125 14:27:02.970226 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:02 crc kubenswrapper[4796]: E1125 14:27:02.970355 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:03.470330777 +0000 UTC m=+151.813440201 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:02 crc kubenswrapper[4796]: I1125 14:27:02.970883 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:02 crc kubenswrapper[4796]: E1125 14:27:02.971246 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:03.47123298 +0000 UTC m=+151.814342404 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.056587 4796 generic.go:334] "Generic (PLEG): container finished" podID="fab48abd-b847-4828-99f2-e9d7d3312e94" containerID="c7e1f60ff6e8f6f667659e5dc9896c30689bb50f7e829fc825e978d0e3736b5d" exitCode=0 Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.056662 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401335-kz9sv" event={"ID":"fab48abd-b847-4828-99f2-e9d7d3312e94","Type":"ContainerDied","Data":"c7e1f60ff6e8f6f667659e5dc9896c30689bb50f7e829fc825e978d0e3736b5d"} Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.057849 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5cdd932362d4bab1e5a406736e8002e661fac33c349cacfad31c76361d188409"} Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.059298 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"35b214e58eb83baf28ff533bd3d7636403d01c0949cd3637212c031a4b13cda0"} Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.059336 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"26307d222ec8cf4e125d9030758080f511809d23c0725ec7077528e36b69c558"} Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.059621 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.060447 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"950752641a3ffa848d9026968e92115e6c32bdaadd83bbb2ceea682f321b40b9"} Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.072516 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:03 crc kubenswrapper[4796]: E1125 14:27:03.072714 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:03.57268492 +0000 UTC m=+151.915794344 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.072830 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:03 crc kubenswrapper[4796]: E1125 14:27:03.073297 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:03.573281126 +0000 UTC m=+151.916390550 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.179229 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:03 crc kubenswrapper[4796]: E1125 14:27:03.181041 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:03.681022213 +0000 UTC m=+152.024131637 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.190723 4796 patch_prober.go:28] interesting pod/router-default-5444994796-c6rl5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:27:03 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Nov 25 14:27:03 crc kubenswrapper[4796]: [+]process-running ok Nov 25 14:27:03 crc kubenswrapper[4796]: healthz check failed Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.190774 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c6rl5" podUID="0162f2df-c29a-4c00-b445-67a9bae4c5ad" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.280872 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:03 crc kubenswrapper[4796]: E1125 14:27:03.281263 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-25 14:27:03.781251241 +0000 UTC m=+152.124360675 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.381990 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:03 crc kubenswrapper[4796]: E1125 14:27:03.382167 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:03.882142005 +0000 UTC m=+152.225251429 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.382389 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:03 crc kubenswrapper[4796]: E1125 14:27:03.382680 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:03.882669389 +0000 UTC m=+152.225778803 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.457794 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dsq6m"] Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.459231 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dsq6m" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.467594 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.483332 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:03 crc kubenswrapper[4796]: E1125 14:27:03.483843 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:03.98382815 +0000 UTC m=+152.326937564 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.496220 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dsq6m"] Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.585161 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c3dfd30-55e6-44cf-9657-cff0cc0d2499-utilities\") pod \"community-operators-dsq6m\" (UID: \"8c3dfd30-55e6-44cf-9657-cff0cc0d2499\") " pod="openshift-marketplace/community-operators-dsq6m" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.585244 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.585268 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r74x9\" (UniqueName: \"kubernetes.io/projected/8c3dfd30-55e6-44cf-9657-cff0cc0d2499-kube-api-access-r74x9\") pod \"community-operators-dsq6m\" (UID: \"8c3dfd30-55e6-44cf-9657-cff0cc0d2499\") " pod="openshift-marketplace/community-operators-dsq6m" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.585338 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c3dfd30-55e6-44cf-9657-cff0cc0d2499-catalog-content\") pod \"community-operators-dsq6m\" (UID: \"8c3dfd30-55e6-44cf-9657-cff0cc0d2499\") " pod="openshift-marketplace/community-operators-dsq6m" Nov 25 14:27:03 crc kubenswrapper[4796]: E1125 14:27:03.585557 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:04.085541017 +0000 UTC m=+152.428650441 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.643400 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bbltb"] Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.644586 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bbltb" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.648228 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.668352 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k6xrl" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.674689 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bbltb"] Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.687058 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:03 crc kubenswrapper[4796]: E1125 14:27:03.687162 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:04.187145291 +0000 UTC m=+152.530254715 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.687358 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c-catalog-content\") pod \"certified-operators-bbltb\" (UID: \"a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c\") " pod="openshift-marketplace/certified-operators-bbltb" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.687390 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c3dfd30-55e6-44cf-9657-cff0cc0d2499-catalog-content\") pod \"community-operators-dsq6m\" (UID: \"8c3dfd30-55e6-44cf-9657-cff0cc0d2499\") " pod="openshift-marketplace/community-operators-dsq6m" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.687420 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c3dfd30-55e6-44cf-9657-cff0cc0d2499-utilities\") pod \"community-operators-dsq6m\" (UID: \"8c3dfd30-55e6-44cf-9657-cff0cc0d2499\") " pod="openshift-marketplace/community-operators-dsq6m" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.687443 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: 
\"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.687461 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r74x9\" (UniqueName: \"kubernetes.io/projected/8c3dfd30-55e6-44cf-9657-cff0cc0d2499-kube-api-access-r74x9\") pod \"community-operators-dsq6m\" (UID: \"8c3dfd30-55e6-44cf-9657-cff0cc0d2499\") " pod="openshift-marketplace/community-operators-dsq6m" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.687500 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c-utilities\") pod \"certified-operators-bbltb\" (UID: \"a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c\") " pod="openshift-marketplace/certified-operators-bbltb" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.687515 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h25xt\" (UniqueName: \"kubernetes.io/projected/a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c-kube-api-access-h25xt\") pod \"certified-operators-bbltb\" (UID: \"a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c\") " pod="openshift-marketplace/certified-operators-bbltb" Nov 25 14:27:03 crc kubenswrapper[4796]: E1125 14:27:03.687944 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:04.187929591 +0000 UTC m=+152.531039015 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.688243 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c3dfd30-55e6-44cf-9657-cff0cc0d2499-utilities\") pod \"community-operators-dsq6m\" (UID: \"8c3dfd30-55e6-44cf-9657-cff0cc0d2499\") " pod="openshift-marketplace/community-operators-dsq6m" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.688483 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c3dfd30-55e6-44cf-9657-cff0cc0d2499-catalog-content\") pod \"community-operators-dsq6m\" (UID: \"8c3dfd30-55e6-44cf-9657-cff0cc0d2499\") " pod="openshift-marketplace/community-operators-dsq6m" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.720109 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r74x9\" (UniqueName: \"kubernetes.io/projected/8c3dfd30-55e6-44cf-9657-cff0cc0d2499-kube-api-access-r74x9\") pod \"community-operators-dsq6m\" (UID: \"8c3dfd30-55e6-44cf-9657-cff0cc0d2499\") " pod="openshift-marketplace/community-operators-dsq6m" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.743611 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k6xrl" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.746946 4796 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.747886 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.752102 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.752229 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.753791 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.775829 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dsq6m" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.787937 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.788173 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8aa1c4ac-9347-4234-8c46-6a522b18b859-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8aa1c4ac-9347-4234-8c46-6a522b18b859\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.788212 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c-utilities\") pod \"certified-operators-bbltb\" (UID: \"a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c\") " pod="openshift-marketplace/certified-operators-bbltb" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.788229 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h25xt\" (UniqueName: \"kubernetes.io/projected/a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c-kube-api-access-h25xt\") pod \"certified-operators-bbltb\" (UID: \"a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c\") " pod="openshift-marketplace/certified-operators-bbltb" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.788271 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c-catalog-content\") pod \"certified-operators-bbltb\" (UID: \"a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c\") " pod="openshift-marketplace/certified-operators-bbltb" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.788304 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8aa1c4ac-9347-4234-8c46-6a522b18b859-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8aa1c4ac-9347-4234-8c46-6a522b18b859\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:27:03 crc kubenswrapper[4796]: E1125 14:27:03.789061 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:04.289045692 +0000 UTC m=+152.632155116 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.790096 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c-utilities\") pod \"certified-operators-bbltb\" (UID: \"a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c\") " pod="openshift-marketplace/certified-operators-bbltb" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.790105 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c-catalog-content\") pod \"certified-operators-bbltb\" (UID: \"a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c\") " pod="openshift-marketplace/certified-operators-bbltb" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.829485 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h25xt\" (UniqueName: \"kubernetes.io/projected/a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c-kube-api-access-h25xt\") pod \"certified-operators-bbltb\" (UID: \"a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c\") " pod="openshift-marketplace/certified-operators-bbltb" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.839653 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q7hvt"] Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.840835 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q7hvt" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.854161 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q7hvt"] Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.879111 4796 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.888970 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb9121e4-c300-4964-9021-5fe2ea80802c-catalog-content\") pod \"community-operators-q7hvt\" (UID: \"bb9121e4-c300-4964-9021-5fe2ea80802c\") " pod="openshift-marketplace/community-operators-q7hvt" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.889009 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8aa1c4ac-9347-4234-8c46-6a522b18b859-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8aa1c4ac-9347-4234-8c46-6a522b18b859\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.889024 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ns4q\" (UniqueName: \"kubernetes.io/projected/bb9121e4-c300-4964-9021-5fe2ea80802c-kube-api-access-2ns4q\") pod \"community-operators-q7hvt\" (UID: \"bb9121e4-c300-4964-9021-5fe2ea80802c\") " pod="openshift-marketplace/community-operators-q7hvt" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.889138 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8aa1c4ac-9347-4234-8c46-6a522b18b859-kubelet-dir\") pod 
\"revision-pruner-9-crc\" (UID: \"8aa1c4ac-9347-4234-8c46-6a522b18b859\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.889195 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.889227 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb9121e4-c300-4964-9021-5fe2ea80802c-utilities\") pod \"community-operators-q7hvt\" (UID: \"bb9121e4-c300-4964-9021-5fe2ea80802c\") " pod="openshift-marketplace/community-operators-q7hvt" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.889322 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8aa1c4ac-9347-4234-8c46-6a522b18b859-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8aa1c4ac-9347-4234-8c46-6a522b18b859\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:27:03 crc kubenswrapper[4796]: E1125 14:27:03.889734 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:04.389723931 +0000 UTC m=+152.732833355 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.914363 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8aa1c4ac-9347-4234-8c46-6a522b18b859-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8aa1c4ac-9347-4234-8c46-6a522b18b859\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.990260 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.990820 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb9121e4-c300-4964-9021-5fe2ea80802c-utilities\") pod \"community-operators-q7hvt\" (UID: \"bb9121e4-c300-4964-9021-5fe2ea80802c\") " pod="openshift-marketplace/community-operators-q7hvt" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.990864 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb9121e4-c300-4964-9021-5fe2ea80802c-catalog-content\") pod \"community-operators-q7hvt\" (UID: \"bb9121e4-c300-4964-9021-5fe2ea80802c\") " 
pod="openshift-marketplace/community-operators-q7hvt" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.990886 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ns4q\" (UniqueName: \"kubernetes.io/projected/bb9121e4-c300-4964-9021-5fe2ea80802c-kube-api-access-2ns4q\") pod \"community-operators-q7hvt\" (UID: \"bb9121e4-c300-4964-9021-5fe2ea80802c\") " pod="openshift-marketplace/community-operators-q7hvt" Nov 25 14:27:03 crc kubenswrapper[4796]: E1125 14:27:03.991199 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:04.491184281 +0000 UTC m=+152.834293705 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.991509 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb9121e4-c300-4964-9021-5fe2ea80802c-utilities\") pod \"community-operators-q7hvt\" (UID: \"bb9121e4-c300-4964-9021-5fe2ea80802c\") " pod="openshift-marketplace/community-operators-q7hvt" Nov 25 14:27:03 crc kubenswrapper[4796]: I1125 14:27:03.991717 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb9121e4-c300-4964-9021-5fe2ea80802c-catalog-content\") pod \"community-operators-q7hvt\" (UID: 
\"bb9121e4-c300-4964-9021-5fe2ea80802c\") " pod="openshift-marketplace/community-operators-q7hvt" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.018611 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bbltb" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.028281 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ns4q\" (UniqueName: \"kubernetes.io/projected/bb9121e4-c300-4964-9021-5fe2ea80802c-kube-api-access-2ns4q\") pod \"community-operators-q7hvt\" (UID: \"bb9121e4-c300-4964-9021-5fe2ea80802c\") " pod="openshift-marketplace/community-operators-q7hvt" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.040960 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4wnxb"] Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.045228 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4wnxb" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.049945 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4wnxb"] Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.063026 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.080869 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w7tqs" event={"ID":"a9d6e924-f8c3-4f0a-92f3-942e822e5fc5","Type":"ContainerStarted","Data":"d0660da5fa24f1c844a0f3b787ed1a2f5bdc7f4de530dedc3d9deb28ebd45d5b"} Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.080903 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w7tqs" event={"ID":"a9d6e924-f8c3-4f0a-92f3-942e822e5fc5","Type":"ContainerStarted","Data":"0567a35f493896bcd89688a75f43ee72dd01536b087f20c067b33f4b97671f14"} Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.080913 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w7tqs" event={"ID":"a9d6e924-f8c3-4f0a-92f3-942e822e5fc5","Type":"ContainerStarted","Data":"9d1f20708081b0da7d5a87368a230d1eae13d8cc300d6522777e3b20e8559252"} Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.090678 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dsq6m"] Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.099336 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7b5a9d6-081c-4217-8498-19ab1decb386-utilities\") pod \"certified-operators-4wnxb\" (UID: \"c7b5a9d6-081c-4217-8498-19ab1decb386\") " pod="openshift-marketplace/certified-operators-4wnxb" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.099379 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tvvv\" (UniqueName: \"kubernetes.io/projected/c7b5a9d6-081c-4217-8498-19ab1decb386-kube-api-access-6tvvv\") pod \"certified-operators-4wnxb\" (UID: 
\"c7b5a9d6-081c-4217-8498-19ab1decb386\") " pod="openshift-marketplace/certified-operators-4wnxb" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.099407 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.099473 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7b5a9d6-081c-4217-8498-19ab1decb386-catalog-content\") pod \"certified-operators-4wnxb\" (UID: \"c7b5a9d6-081c-4217-8498-19ab1decb386\") " pod="openshift-marketplace/certified-operators-4wnxb" Nov 25 14:27:04 crc kubenswrapper[4796]: E1125 14:27:04.099771 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:04.5997588 +0000 UTC m=+152.942868224 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:04 crc kubenswrapper[4796]: W1125 14:27:04.112757 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c3dfd30_55e6_44cf_9657_cff0cc0d2499.slice/crio-f5b624de4071471761f1dbb64ad3763bbf6304e848aceed503f55fd5e6d28ec6 WatchSource:0}: Error finding container f5b624de4071471761f1dbb64ad3763bbf6304e848aceed503f55fd5e6d28ec6: Status 404 returned error can't find the container with id f5b624de4071471761f1dbb64ad3763bbf6304e848aceed503f55fd5e6d28ec6 Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.114597 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-w7tqs" podStartSLOduration=12.114583587 podStartE2EDuration="12.114583587s" podCreationTimestamp="2025-11-25 14:26:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:27:04.113134277 +0000 UTC m=+152.456243701" watchObservedRunningTime="2025-11-25 14:27:04.114583587 +0000 UTC m=+152.457693011" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.169453 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q7hvt" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.181008 4796 patch_prober.go:28] interesting pod/router-default-5444994796-c6rl5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:27:04 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Nov 25 14:27:04 crc kubenswrapper[4796]: [+]process-running ok Nov 25 14:27:04 crc kubenswrapper[4796]: healthz check failed Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.181062 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c6rl5" podUID="0162f2df-c29a-4c00-b445-67a9bae4c5ad" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.201657 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:04 crc kubenswrapper[4796]: E1125 14:27:04.201853 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:04.701831107 +0000 UTC m=+153.044940531 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.201894 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tvvv\" (UniqueName: \"kubernetes.io/projected/c7b5a9d6-081c-4217-8498-19ab1decb386-kube-api-access-6tvvv\") pod \"certified-operators-4wnxb\" (UID: \"c7b5a9d6-081c-4217-8498-19ab1decb386\") " pod="openshift-marketplace/certified-operators-4wnxb" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.201945 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.202129 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7b5a9d6-081c-4217-8498-19ab1decb386-catalog-content\") pod \"certified-operators-4wnxb\" (UID: \"c7b5a9d6-081c-4217-8498-19ab1decb386\") " pod="openshift-marketplace/certified-operators-4wnxb" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.202187 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7b5a9d6-081c-4217-8498-19ab1decb386-utilities\") pod \"certified-operators-4wnxb\" (UID: 
\"c7b5a9d6-081c-4217-8498-19ab1decb386\") " pod="openshift-marketplace/certified-operators-4wnxb" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.202889 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7b5a9d6-081c-4217-8498-19ab1decb386-utilities\") pod \"certified-operators-4wnxb\" (UID: \"c7b5a9d6-081c-4217-8498-19ab1decb386\") " pod="openshift-marketplace/certified-operators-4wnxb" Nov 25 14:27:04 crc kubenswrapper[4796]: E1125 14:27:04.203805 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:04.703795389 +0000 UTC m=+153.046904813 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.207004 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7b5a9d6-081c-4217-8498-19ab1decb386-catalog-content\") pod \"certified-operators-4wnxb\" (UID: \"c7b5a9d6-081c-4217-8498-19ab1decb386\") " pod="openshift-marketplace/certified-operators-4wnxb" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.228073 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tvvv\" (UniqueName: \"kubernetes.io/projected/c7b5a9d6-081c-4217-8498-19ab1decb386-kube-api-access-6tvvv\") pod \"certified-operators-4wnxb\" (UID: 
\"c7b5a9d6-081c-4217-8498-19ab1decb386\") " pod="openshift-marketplace/certified-operators-4wnxb" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.304474 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:04 crc kubenswrapper[4796]: E1125 14:27:04.304852 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:04.804838138 +0000 UTC m=+153.147947562 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.364743 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4wnxb" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.406636 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:04 crc kubenswrapper[4796]: E1125 14:27:04.406923 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:04.906912133 +0000 UTC m=+153.250021557 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.418694 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bbltb"] Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.511017 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:04 crc kubenswrapper[4796]: E1125 14:27:04.511205 4796 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:27:05.011178668 +0000 UTC m=+153.354288082 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.511374 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:04 crc kubenswrapper[4796]: E1125 14:27:04.511757 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:27:05.011748744 +0000 UTC m=+153.354858168 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-95xvf" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.518950 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.534754 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.538197 4796 patch_prober.go:28] interesting pod/downloads-7954f5f757-tsl5t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.538236 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tsl5t" podUID="c239761f-ade6-47eb-8fa5-f5178577ccb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.538534 4796 patch_prober.go:28] interesting pod/downloads-7954f5f757-tsl5t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.538552 4796 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tsl5t" podUID="c239761f-ade6-47eb-8fa5-f5178577ccb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Nov 25 14:27:04 crc kubenswrapper[4796]: W1125 14:27:04.553585 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8aa1c4ac_9347_4234_8c46_6a522b18b859.slice/crio-e3adca9e444f68c7dca87f346cae33266da3e12bd066fa88f73921ca951e743a WatchSource:0}: Error finding container e3adca9e444f68c7dca87f346cae33266da3e12bd066fa88f73921ca951e743a: Status 404 returned error can't find the container with id e3adca9e444f68c7dca87f346cae33266da3e12bd066fa88f73921ca951e743a Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.554151 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.571124 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.571992 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.596384 4796 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-25T14:27:03.879134598Z","Handler":null,"Name":""} Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.605161 4796 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.605199 4796 csi_plugin.go:113] 
kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.612827 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.621715 4796 patch_prober.go:28] interesting pod/apiserver-76f77b778f-vzn94 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 25 14:27:04 crc kubenswrapper[4796]: [+]log ok Nov 25 14:27:04 crc kubenswrapper[4796]: [+]etcd ok Nov 25 14:27:04 crc kubenswrapper[4796]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 25 14:27:04 crc kubenswrapper[4796]: [+]poststarthook/generic-apiserver-start-informers ok Nov 25 14:27:04 crc kubenswrapper[4796]: [+]poststarthook/max-in-flight-filter ok Nov 25 14:27:04 crc kubenswrapper[4796]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 25 14:27:04 crc kubenswrapper[4796]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 25 14:27:04 crc kubenswrapper[4796]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 25 14:27:04 crc kubenswrapper[4796]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Nov 25 14:27:04 crc kubenswrapper[4796]: [+]poststarthook/project.openshift.io-projectcache ok Nov 25 14:27:04 crc kubenswrapper[4796]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 25 14:27:04 crc kubenswrapper[4796]: [+]poststarthook/openshift.io-startinformers ok Nov 25 14:27:04 crc kubenswrapper[4796]: 
[+]poststarthook/openshift.io-restmapperupdater ok Nov 25 14:27:04 crc kubenswrapper[4796]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 25 14:27:04 crc kubenswrapper[4796]: livez check failed Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.621776 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-vzn94" podUID="453a1a57-5017-420d-b2e5-2fef1a7721f5" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.621909 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.679884 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.679926 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.682875 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.692869 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.714886 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.736514 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401335-kz9sv" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.737699 4796 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.737891 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.745420 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q7hvt"] Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.807975 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-x57qm" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.811910 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-x57qm" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.815619 4796 patch_prober.go:28] interesting 
pod/console-f9d7485db-x57qm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.815671 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-x57qm" podUID="fa025925-c61e-49ae-ba50-79f4a401a20f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.816081 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fab48abd-b847-4828-99f2-e9d7d3312e94-secret-volume\") pod \"fab48abd-b847-4828-99f2-e9d7d3312e94\" (UID: \"fab48abd-b847-4828-99f2-e9d7d3312e94\") " Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.816150 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fab48abd-b847-4828-99f2-e9d7d3312e94-config-volume\") pod \"fab48abd-b847-4828-99f2-e9d7d3312e94\" (UID: \"fab48abd-b847-4828-99f2-e9d7d3312e94\") " Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.820162 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fab48abd-b847-4828-99f2-e9d7d3312e94-config-volume" (OuterVolumeSpecName: "config-volume") pod "fab48abd-b847-4828-99f2-e9d7d3312e94" (UID: "fab48abd-b847-4828-99f2-e9d7d3312e94"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.820167 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcw5l\" (UniqueName: \"kubernetes.io/projected/fab48abd-b847-4828-99f2-e9d7d3312e94-kube-api-access-dcw5l\") pod \"fab48abd-b847-4828-99f2-e9d7d3312e94\" (UID: \"fab48abd-b847-4828-99f2-e9d7d3312e94\") " Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.820764 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fab48abd-b847-4828-99f2-e9d7d3312e94-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.844743 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fab48abd-b847-4828-99f2-e9d7d3312e94-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fab48abd-b847-4828-99f2-e9d7d3312e94" (UID: "fab48abd-b847-4828-99f2-e9d7d3312e94"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.850348 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-w9vpf" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.851305 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fab48abd-b847-4828-99f2-e9d7d3312e94-kube-api-access-dcw5l" (OuterVolumeSpecName: "kube-api-access-dcw5l") pod "fab48abd-b847-4828-99f2-e9d7d3312e94" (UID: "fab48abd-b847-4828-99f2-e9d7d3312e94"). InnerVolumeSpecName "kube-api-access-dcw5l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.921508 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcw5l\" (UniqueName: \"kubernetes.io/projected/fab48abd-b847-4828-99f2-e9d7d3312e94-kube-api-access-dcw5l\") on node \"crc\" DevicePath \"\"" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.921627 4796 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fab48abd-b847-4828-99f2-e9d7d3312e94-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 14:27:04 crc kubenswrapper[4796]: I1125 14:27:04.926064 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-95xvf\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.035582 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4wnxb"] Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.049748 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.090007 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401335-kz9sv" event={"ID":"fab48abd-b847-4828-99f2-e9d7d3312e94","Type":"ContainerDied","Data":"c6f93103c8a2edb3fd5a71e11ade6c4c55a83d39f2958c995be15870ca6b1930"} Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.090194 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6f93103c8a2edb3fd5a71e11ade6c4c55a83d39f2958c995be15870ca6b1930" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.090299 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401335-kz9sv" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.093984 4796 generic.go:334] "Generic (PLEG): container finished" podID="8c3dfd30-55e6-44cf-9657-cff0cc0d2499" containerID="9c61951ad169925738339a9fe5daa5c28f89515b59f6bcb97b31d4e4066d7150" exitCode=0 Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.094055 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dsq6m" event={"ID":"8c3dfd30-55e6-44cf-9657-cff0cc0d2499","Type":"ContainerDied","Data":"9c61951ad169925738339a9fe5daa5c28f89515b59f6bcb97b31d4e4066d7150"} Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.094086 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dsq6m" event={"ID":"8c3dfd30-55e6-44cf-9657-cff0cc0d2499","Type":"ContainerStarted","Data":"f5b624de4071471761f1dbb64ad3763bbf6304e848aceed503f55fd5e6d28ec6"} Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.096121 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.097745 
4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wnxb" event={"ID":"c7b5a9d6-081c-4217-8498-19ab1decb386","Type":"ContainerStarted","Data":"4ee1d5e6f6200caef9a8e76edd3d7fdbb90d14b87a793bd16ac14a266ebad2fc"} Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.099010 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8aa1c4ac-9347-4234-8c46-6a522b18b859","Type":"ContainerStarted","Data":"e3adca9e444f68c7dca87f346cae33266da3e12bd066fa88f73921ca951e743a"} Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.109755 4796 generic.go:334] "Generic (PLEG): container finished" podID="a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c" containerID="3e1a69fee3e741f7b55cc2c6ad3e12d6c9cb3f5653ef75b9f8fa276d3776b903" exitCode=0 Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.110638 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbltb" event={"ID":"a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c","Type":"ContainerDied","Data":"3e1a69fee3e741f7b55cc2c6ad3e12d6c9cb3f5653ef75b9f8fa276d3776b903"} Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.112880 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbltb" event={"ID":"a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c","Type":"ContainerStarted","Data":"2131d6ad63dd42fecadf122001e6521cf46f473b0bc5aeba591cc6570009a8d8"} Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.123943 4796 generic.go:334] "Generic (PLEG): container finished" podID="bb9121e4-c300-4964-9021-5fe2ea80802c" containerID="7ae723b6b0dc1f92766cab82c6269901f8eda21b97e6963e3b6a28d4bdd2c9af" exitCode=0 Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.124655 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7hvt" 
event={"ID":"bb9121e4-c300-4964-9021-5fe2ea80802c","Type":"ContainerDied","Data":"7ae723b6b0dc1f92766cab82c6269901f8eda21b97e6963e3b6a28d4bdd2c9af"} Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.124712 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7hvt" event={"ID":"bb9121e4-c300-4964-9021-5fe2ea80802c","Type":"ContainerStarted","Data":"6cae5388fb3d9058f724ee076b94602a3f4bb707418082f9f0bfb77f376887d7"} Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.139730 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-l7rnd" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.180863 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-c6rl5" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.190496 4796 patch_prober.go:28] interesting pod/router-default-5444994796-c6rl5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:27:05 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Nov 25 14:27:05 crc kubenswrapper[4796]: [+]process-running ok Nov 25 14:27:05 crc kubenswrapper[4796]: healthz check failed Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.190545 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c6rl5" podUID="0162f2df-c29a-4c00-b445-67a9bae4c5ad" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.316539 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9lndl" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.429325 4796 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-marketplace-pxlgd"] Nov 25 14:27:05 crc kubenswrapper[4796]: E1125 14:27:05.429545 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fab48abd-b847-4828-99f2-e9d7d3312e94" containerName="collect-profiles" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.429556 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fab48abd-b847-4828-99f2-e9d7d3312e94" containerName="collect-profiles" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.429669 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="fab48abd-b847-4828-99f2-e9d7d3312e94" containerName="collect-profiles" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.430342 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxlgd" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.431784 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.446484 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxlgd"] Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.537233 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1c15ec0-52c3-4420-9ccf-a50630662516-utilities\") pod \"redhat-marketplace-pxlgd\" (UID: \"c1c15ec0-52c3-4420-9ccf-a50630662516\") " pod="openshift-marketplace/redhat-marketplace-pxlgd" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.537287 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbnfx\" (UniqueName: \"kubernetes.io/projected/c1c15ec0-52c3-4420-9ccf-a50630662516-kube-api-access-tbnfx\") pod \"redhat-marketplace-pxlgd\" (UID: \"c1c15ec0-52c3-4420-9ccf-a50630662516\") 
" pod="openshift-marketplace/redhat-marketplace-pxlgd" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.537398 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1c15ec0-52c3-4420-9ccf-a50630662516-catalog-content\") pod \"redhat-marketplace-pxlgd\" (UID: \"c1c15ec0-52c3-4420-9ccf-a50630662516\") " pod="openshift-marketplace/redhat-marketplace-pxlgd" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.551071 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-95xvf"] Nov 25 14:27:05 crc kubenswrapper[4796]: W1125 14:27:05.558829 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd07e3b8b_d9ae_40f6_901c_1be058824059.slice/crio-5702379ad0b40fb4da498e3daada175324912f7dbe713f39894a7b94b89efe81 WatchSource:0}: Error finding container 5702379ad0b40fb4da498e3daada175324912f7dbe713f39894a7b94b89efe81: Status 404 returned error can't find the container with id 5702379ad0b40fb4da498e3daada175324912f7dbe713f39894a7b94b89efe81 Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.561079 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6sp9g" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.573019 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-b72mt" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.605588 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jvr77" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.638525 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c1c15ec0-52c3-4420-9ccf-a50630662516-utilities\") pod \"redhat-marketplace-pxlgd\" (UID: \"c1c15ec0-52c3-4420-9ccf-a50630662516\") " pod="openshift-marketplace/redhat-marketplace-pxlgd" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.638581 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbnfx\" (UniqueName: \"kubernetes.io/projected/c1c15ec0-52c3-4420-9ccf-a50630662516-kube-api-access-tbnfx\") pod \"redhat-marketplace-pxlgd\" (UID: \"c1c15ec0-52c3-4420-9ccf-a50630662516\") " pod="openshift-marketplace/redhat-marketplace-pxlgd" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.638600 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1c15ec0-52c3-4420-9ccf-a50630662516-catalog-content\") pod \"redhat-marketplace-pxlgd\" (UID: \"c1c15ec0-52c3-4420-9ccf-a50630662516\") " pod="openshift-marketplace/redhat-marketplace-pxlgd" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.639661 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1c15ec0-52c3-4420-9ccf-a50630662516-utilities\") pod \"redhat-marketplace-pxlgd\" (UID: \"c1c15ec0-52c3-4420-9ccf-a50630662516\") " pod="openshift-marketplace/redhat-marketplace-pxlgd" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.640078 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1c15ec0-52c3-4420-9ccf-a50630662516-catalog-content\") pod \"redhat-marketplace-pxlgd\" (UID: \"c1c15ec0-52c3-4420-9ccf-a50630662516\") " pod="openshift-marketplace/redhat-marketplace-pxlgd" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.667015 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbnfx\" (UniqueName: 
\"kubernetes.io/projected/c1c15ec0-52c3-4420-9ccf-a50630662516-kube-api-access-tbnfx\") pod \"redhat-marketplace-pxlgd\" (UID: \"c1c15ec0-52c3-4420-9ccf-a50630662516\") " pod="openshift-marketplace/redhat-marketplace-pxlgd" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.742213 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxlgd" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.844794 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wdh6v" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.848265 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r2vrr"] Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.849769 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2vrr" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.883364 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2vrr"] Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.944380 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p52ds\" (UniqueName: \"kubernetes.io/projected/11354fb2-68d2-4e9c-9072-98e9866eb162-kube-api-access-p52ds\") pod \"redhat-marketplace-r2vrr\" (UID: \"11354fb2-68d2-4e9c-9072-98e9866eb162\") " pod="openshift-marketplace/redhat-marketplace-r2vrr" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.944426 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11354fb2-68d2-4e9c-9072-98e9866eb162-utilities\") pod \"redhat-marketplace-r2vrr\" (UID: \"11354fb2-68d2-4e9c-9072-98e9866eb162\") " pod="openshift-marketplace/redhat-marketplace-r2vrr" Nov 25 
14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.944476 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11354fb2-68d2-4e9c-9072-98e9866eb162-catalog-content\") pod \"redhat-marketplace-r2vrr\" (UID: \"11354fb2-68d2-4e9c-9072-98e9866eb162\") " pod="openshift-marketplace/redhat-marketplace-r2vrr" Nov 25 14:27:05 crc kubenswrapper[4796]: I1125 14:27:05.998729 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxlgd"] Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.045346 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11354fb2-68d2-4e9c-9072-98e9866eb162-catalog-content\") pod \"redhat-marketplace-r2vrr\" (UID: \"11354fb2-68d2-4e9c-9072-98e9866eb162\") " pod="openshift-marketplace/redhat-marketplace-r2vrr" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.045930 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p52ds\" (UniqueName: \"kubernetes.io/projected/11354fb2-68d2-4e9c-9072-98e9866eb162-kube-api-access-p52ds\") pod \"redhat-marketplace-r2vrr\" (UID: \"11354fb2-68d2-4e9c-9072-98e9866eb162\") " pod="openshift-marketplace/redhat-marketplace-r2vrr" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.045962 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11354fb2-68d2-4e9c-9072-98e9866eb162-utilities\") pod \"redhat-marketplace-r2vrr\" (UID: \"11354fb2-68d2-4e9c-9072-98e9866eb162\") " pod="openshift-marketplace/redhat-marketplace-r2vrr" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.045987 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/11354fb2-68d2-4e9c-9072-98e9866eb162-catalog-content\") pod \"redhat-marketplace-r2vrr\" (UID: \"11354fb2-68d2-4e9c-9072-98e9866eb162\") " pod="openshift-marketplace/redhat-marketplace-r2vrr" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.048198 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11354fb2-68d2-4e9c-9072-98e9866eb162-utilities\") pod \"redhat-marketplace-r2vrr\" (UID: \"11354fb2-68d2-4e9c-9072-98e9866eb162\") " pod="openshift-marketplace/redhat-marketplace-r2vrr" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.061929 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p52ds\" (UniqueName: \"kubernetes.io/projected/11354fb2-68d2-4e9c-9072-98e9866eb162-kube-api-access-p52ds\") pod \"redhat-marketplace-r2vrr\" (UID: \"11354fb2-68d2-4e9c-9072-98e9866eb162\") " pod="openshift-marketplace/redhat-marketplace-r2vrr" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.143498 4796 generic.go:334] "Generic (PLEG): container finished" podID="c7b5a9d6-081c-4217-8498-19ab1decb386" containerID="2fcd7e1c7b5fe2e34ae5e7ff85b7828fd8dd906552b3f83e39339b11be58eaf0" exitCode=0 Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.143926 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wnxb" event={"ID":"c7b5a9d6-081c-4217-8498-19ab1decb386","Type":"ContainerDied","Data":"2fcd7e1c7b5fe2e34ae5e7ff85b7828fd8dd906552b3f83e39339b11be58eaf0"} Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.150887 4796 generic.go:334] "Generic (PLEG): container finished" podID="8aa1c4ac-9347-4234-8c46-6a522b18b859" containerID="824b7a2ad97bbe81e36421bdcbd4673ba33927b756159b466b150e08763ee3f3" exitCode=0 Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.150965 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8aa1c4ac-9347-4234-8c46-6a522b18b859","Type":"ContainerDied","Data":"824b7a2ad97bbe81e36421bdcbd4673ba33927b756159b466b150e08763ee3f3"} Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.152843 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxlgd" event={"ID":"c1c15ec0-52c3-4420-9ccf-a50630662516","Type":"ContainerStarted","Data":"3be93ef79eed084357855fe242079c5ae589a4c3fe8dd9c5f82f8c70d070aa91"} Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.155317 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" event={"ID":"d07e3b8b-d9ae-40f6-901c-1be058824059","Type":"ContainerStarted","Data":"9fb613b99763a92d12779ce685598f9967319da2d8e64df8eb8f2769acbc46c8"} Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.155349 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" event={"ID":"d07e3b8b-d9ae-40f6-901c-1be058824059","Type":"ContainerStarted","Data":"5702379ad0b40fb4da498e3daada175324912f7dbe713f39894a7b94b89efe81"} Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.155366 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.176843 4796 patch_prober.go:28] interesting pod/router-default-5444994796-c6rl5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:27:06 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Nov 25 14:27:06 crc kubenswrapper[4796]: [+]process-running ok Nov 25 14:27:06 crc kubenswrapper[4796]: healthz check failed Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.177073 4796 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c6rl5" podUID="0162f2df-c29a-4c00-b445-67a9bae4c5ad" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.178144 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" podStartSLOduration=132.178132039 podStartE2EDuration="2m12.178132039s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:27:06.174372268 +0000 UTC m=+154.517481692" watchObservedRunningTime="2025-11-25 14:27:06.178132039 +0000 UTC m=+154.521241463" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.193670 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2vrr" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.395431 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2vrr"] Nov 25 14:27:06 crc kubenswrapper[4796]: W1125 14:27:06.406734 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11354fb2_68d2_4e9c_9072_98e9866eb162.slice/crio-4c53b6f1024589aadc8c6ada33fbe9cee79ad965d4680147892ca213299dabbd WatchSource:0}: Error finding container 4c53b6f1024589aadc8c6ada33fbe9cee79ad965d4680147892ca213299dabbd: Status 404 returned error can't find the container with id 4c53b6f1024589aadc8c6ada33fbe9cee79ad965d4680147892ca213299dabbd Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.421080 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 25 14:27:06 crc 
kubenswrapper[4796]: I1125 14:27:06.629142 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.634294 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.636803 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.639309 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.639540 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.661074 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qqcls"] Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.663965 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qqcls" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.666175 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.668425 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qqcls"] Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.754463 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d44b94b1-15d2-48d6-8ae3-bc9787adc1e3-catalog-content\") pod \"redhat-operators-qqcls\" (UID: \"d44b94b1-15d2-48d6-8ae3-bc9787adc1e3\") " pod="openshift-marketplace/redhat-operators-qqcls" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.754515 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kjmb\" (UniqueName: \"kubernetes.io/projected/d44b94b1-15d2-48d6-8ae3-bc9787adc1e3-kube-api-access-8kjmb\") pod \"redhat-operators-qqcls\" (UID: \"d44b94b1-15d2-48d6-8ae3-bc9787adc1e3\") " pod="openshift-marketplace/redhat-operators-qqcls" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.754554 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d44b94b1-15d2-48d6-8ae3-bc9787adc1e3-utilities\") pod \"redhat-operators-qqcls\" (UID: \"d44b94b1-15d2-48d6-8ae3-bc9787adc1e3\") " pod="openshift-marketplace/redhat-operators-qqcls" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.754582 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad795e35-d372-4564-adbf-04646d54d05d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: 
\"ad795e35-d372-4564-adbf-04646d54d05d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.754888 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad795e35-d372-4564-adbf-04646d54d05d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ad795e35-d372-4564-adbf-04646d54d05d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.856744 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad795e35-d372-4564-adbf-04646d54d05d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ad795e35-d372-4564-adbf-04646d54d05d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.856814 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d44b94b1-15d2-48d6-8ae3-bc9787adc1e3-catalog-content\") pod \"redhat-operators-qqcls\" (UID: \"d44b94b1-15d2-48d6-8ae3-bc9787adc1e3\") " pod="openshift-marketplace/redhat-operators-qqcls" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.856848 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kjmb\" (UniqueName: \"kubernetes.io/projected/d44b94b1-15d2-48d6-8ae3-bc9787adc1e3-kube-api-access-8kjmb\") pod \"redhat-operators-qqcls\" (UID: \"d44b94b1-15d2-48d6-8ae3-bc9787adc1e3\") " pod="openshift-marketplace/redhat-operators-qqcls" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.856895 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d44b94b1-15d2-48d6-8ae3-bc9787adc1e3-utilities\") pod \"redhat-operators-qqcls\" (UID: 
\"d44b94b1-15d2-48d6-8ae3-bc9787adc1e3\") " pod="openshift-marketplace/redhat-operators-qqcls" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.856917 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad795e35-d372-4564-adbf-04646d54d05d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ad795e35-d372-4564-adbf-04646d54d05d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.856995 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad795e35-d372-4564-adbf-04646d54d05d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ad795e35-d372-4564-adbf-04646d54d05d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.857428 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d44b94b1-15d2-48d6-8ae3-bc9787adc1e3-catalog-content\") pod \"redhat-operators-qqcls\" (UID: \"d44b94b1-15d2-48d6-8ae3-bc9787adc1e3\") " pod="openshift-marketplace/redhat-operators-qqcls" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.857496 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d44b94b1-15d2-48d6-8ae3-bc9787adc1e3-utilities\") pod \"redhat-operators-qqcls\" (UID: \"d44b94b1-15d2-48d6-8ae3-bc9787adc1e3\") " pod="openshift-marketplace/redhat-operators-qqcls" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.881194 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad795e35-d372-4564-adbf-04646d54d05d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ad795e35-d372-4564-adbf-04646d54d05d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 
14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.881276 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kjmb\" (UniqueName: \"kubernetes.io/projected/d44b94b1-15d2-48d6-8ae3-bc9787adc1e3-kube-api-access-8kjmb\") pod \"redhat-operators-qqcls\" (UID: \"d44b94b1-15d2-48d6-8ae3-bc9787adc1e3\") " pod="openshift-marketplace/redhat-operators-qqcls" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.970363 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 14:27:06 crc kubenswrapper[4796]: I1125 14:27:06.989998 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qqcls" Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.030173 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9f5sn"] Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.031490 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9f5sn" Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.045089 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9f5sn"] Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.161202 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40cd1b96-cac2-46f0-9b1d-4934d6f13087-catalog-content\") pod \"redhat-operators-9f5sn\" (UID: \"40cd1b96-cac2-46f0-9b1d-4934d6f13087\") " pod="openshift-marketplace/redhat-operators-9f5sn" Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.161241 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40cd1b96-cac2-46f0-9b1d-4934d6f13087-utilities\") pod \"redhat-operators-9f5sn\" (UID: \"40cd1b96-cac2-46f0-9b1d-4934d6f13087\") " pod="openshift-marketplace/redhat-operators-9f5sn" Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.161330 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhgq9\" (UniqueName: \"kubernetes.io/projected/40cd1b96-cac2-46f0-9b1d-4934d6f13087-kube-api-access-rhgq9\") pod \"redhat-operators-9f5sn\" (UID: \"40cd1b96-cac2-46f0-9b1d-4934d6f13087\") " pod="openshift-marketplace/redhat-operators-9f5sn" Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.164343 4796 generic.go:334] "Generic (PLEG): container finished" podID="c1c15ec0-52c3-4420-9ccf-a50630662516" containerID="ef0b7183594486ce00fa56545d344ee4bfeb8f7d70f2099b8c623dc1a27bda82" exitCode=0 Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.164402 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxlgd" 
event={"ID":"c1c15ec0-52c3-4420-9ccf-a50630662516","Type":"ContainerDied","Data":"ef0b7183594486ce00fa56545d344ee4bfeb8f7d70f2099b8c623dc1a27bda82"} Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.175294 4796 patch_prober.go:28] interesting pod/router-default-5444994796-c6rl5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:27:07 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Nov 25 14:27:07 crc kubenswrapper[4796]: [+]process-running ok Nov 25 14:27:07 crc kubenswrapper[4796]: healthz check failed Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.175340 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c6rl5" podUID="0162f2df-c29a-4c00-b445-67a9bae4c5ad" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.175698 4796 generic.go:334] "Generic (PLEG): container finished" podID="11354fb2-68d2-4e9c-9072-98e9866eb162" containerID="d697a652cef86b40e775e43969a5010165c5251e4ee07b3ce48572f5b90ecf5f" exitCode=0 Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.176075 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2vrr" event={"ID":"11354fb2-68d2-4e9c-9072-98e9866eb162","Type":"ContainerDied","Data":"d697a652cef86b40e775e43969a5010165c5251e4ee07b3ce48572f5b90ecf5f"} Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.176106 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2vrr" event={"ID":"11354fb2-68d2-4e9c-9072-98e9866eb162","Type":"ContainerStarted","Data":"4c53b6f1024589aadc8c6ada33fbe9cee79ad965d4680147892ca213299dabbd"} Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.250789 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.262780 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhgq9\" (UniqueName: \"kubernetes.io/projected/40cd1b96-cac2-46f0-9b1d-4934d6f13087-kube-api-access-rhgq9\") pod \"redhat-operators-9f5sn\" (UID: \"40cd1b96-cac2-46f0-9b1d-4934d6f13087\") " pod="openshift-marketplace/redhat-operators-9f5sn" Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.262853 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40cd1b96-cac2-46f0-9b1d-4934d6f13087-catalog-content\") pod \"redhat-operators-9f5sn\" (UID: \"40cd1b96-cac2-46f0-9b1d-4934d6f13087\") " pod="openshift-marketplace/redhat-operators-9f5sn" Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.262880 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40cd1b96-cac2-46f0-9b1d-4934d6f13087-utilities\") pod \"redhat-operators-9f5sn\" (UID: \"40cd1b96-cac2-46f0-9b1d-4934d6f13087\") " pod="openshift-marketplace/redhat-operators-9f5sn" Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.264019 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40cd1b96-cac2-46f0-9b1d-4934d6f13087-utilities\") pod \"redhat-operators-9f5sn\" (UID: \"40cd1b96-cac2-46f0-9b1d-4934d6f13087\") " pod="openshift-marketplace/redhat-operators-9f5sn" Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.264442 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40cd1b96-cac2-46f0-9b1d-4934d6f13087-catalog-content\") pod \"redhat-operators-9f5sn\" (UID: \"40cd1b96-cac2-46f0-9b1d-4934d6f13087\") " pod="openshift-marketplace/redhat-operators-9f5sn" Nov 25 14:27:07 
crc kubenswrapper[4796]: I1125 14:27:07.282016 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhgq9\" (UniqueName: \"kubernetes.io/projected/40cd1b96-cac2-46f0-9b1d-4934d6f13087-kube-api-access-rhgq9\") pod \"redhat-operators-9f5sn\" (UID: \"40cd1b96-cac2-46f0-9b1d-4934d6f13087\") " pod="openshift-marketplace/redhat-operators-9f5sn" Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.393033 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9f5sn" Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.500732 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qqcls"] Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.560071 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.666651 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8aa1c4ac-9347-4234-8c46-6a522b18b859-kubelet-dir\") pod \"8aa1c4ac-9347-4234-8c46-6a522b18b859\" (UID: \"8aa1c4ac-9347-4234-8c46-6a522b18b859\") " Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.666840 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8aa1c4ac-9347-4234-8c46-6a522b18b859-kube-api-access\") pod \"8aa1c4ac-9347-4234-8c46-6a522b18b859\" (UID: \"8aa1c4ac-9347-4234-8c46-6a522b18b859\") " Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.666961 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8aa1c4ac-9347-4234-8c46-6a522b18b859-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8aa1c4ac-9347-4234-8c46-6a522b18b859" (UID: "8aa1c4ac-9347-4234-8c46-6a522b18b859"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.667108 4796 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8aa1c4ac-9347-4234-8c46-6a522b18b859-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.674441 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aa1c4ac-9347-4234-8c46-6a522b18b859-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8aa1c4ac-9347-4234-8c46-6a522b18b859" (UID: "8aa1c4ac-9347-4234-8c46-6a522b18b859"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.770309 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8aa1c4ac-9347-4234-8c46-6a522b18b859-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 14:27:07 crc kubenswrapper[4796]: I1125 14:27:07.776065 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9f5sn"] Nov 25 14:27:07 crc kubenswrapper[4796]: W1125 14:27:07.784623 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40cd1b96_cac2_46f0_9b1d_4934d6f13087.slice/crio-a331d34a77374d250e253d7499c1d6328e54998d5af45a7574ebce1f1025e5e5 WatchSource:0}: Error finding container a331d34a77374d250e253d7499c1d6328e54998d5af45a7574ebce1f1025e5e5: Status 404 returned error can't find the container with id a331d34a77374d250e253d7499c1d6328e54998d5af45a7574ebce1f1025e5e5 Nov 25 14:27:08 crc kubenswrapper[4796]: I1125 14:27:08.175416 4796 patch_prober.go:28] interesting pod/router-default-5444994796-c6rl5 container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:27:08 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Nov 25 14:27:08 crc kubenswrapper[4796]: [+]process-running ok Nov 25 14:27:08 crc kubenswrapper[4796]: healthz check failed Nov 25 14:27:08 crc kubenswrapper[4796]: I1125 14:27:08.175877 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c6rl5" podUID="0162f2df-c29a-4c00-b445-67a9bae4c5ad" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:27:08 crc kubenswrapper[4796]: I1125 14:27:08.187424 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8aa1c4ac-9347-4234-8c46-6a522b18b859","Type":"ContainerDied","Data":"e3adca9e444f68c7dca87f346cae33266da3e12bd066fa88f73921ca951e743a"} Nov 25 14:27:08 crc kubenswrapper[4796]: I1125 14:27:08.187479 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:27:08 crc kubenswrapper[4796]: I1125 14:27:08.187493 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3adca9e444f68c7dca87f346cae33266da3e12bd066fa88f73921ca951e743a" Nov 25 14:27:08 crc kubenswrapper[4796]: I1125 14:27:08.189614 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ad795e35-d372-4564-adbf-04646d54d05d","Type":"ContainerStarted","Data":"1dc383783777a786737cf49f3e134474ad32194464b54b99737f4efddade1dc3"} Nov 25 14:27:08 crc kubenswrapper[4796]: I1125 14:27:08.191137 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9f5sn" event={"ID":"40cd1b96-cac2-46f0-9b1d-4934d6f13087","Type":"ContainerStarted","Data":"a331d34a77374d250e253d7499c1d6328e54998d5af45a7574ebce1f1025e5e5"} Nov 25 14:27:08 crc kubenswrapper[4796]: I1125 14:27:08.192274 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqcls" event={"ID":"d44b94b1-15d2-48d6-8ae3-bc9787adc1e3","Type":"ContainerStarted","Data":"9e745b33f5ce7ab23a402018f5cd4f8a024ef24d429370bba78064bed44b1937"} Nov 25 14:27:09 crc kubenswrapper[4796]: I1125 14:27:09.175110 4796 patch_prober.go:28] interesting pod/router-default-5444994796-c6rl5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:27:09 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Nov 25 14:27:09 crc kubenswrapper[4796]: [+]process-running ok Nov 25 14:27:09 crc kubenswrapper[4796]: healthz check failed Nov 25 14:27:09 crc kubenswrapper[4796]: I1125 14:27:09.175161 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c6rl5" 
podUID="0162f2df-c29a-4c00-b445-67a9bae4c5ad" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:27:09 crc kubenswrapper[4796]: I1125 14:27:09.202299 4796 generic.go:334] "Generic (PLEG): container finished" podID="d44b94b1-15d2-48d6-8ae3-bc9787adc1e3" containerID="0e84b48b0f827dd23af53e69e4bc9606ff90c759acb681cf28c8ba3eade68cb4" exitCode=0 Nov 25 14:27:09 crc kubenswrapper[4796]: I1125 14:27:09.202344 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqcls" event={"ID":"d44b94b1-15d2-48d6-8ae3-bc9787adc1e3","Type":"ContainerDied","Data":"0e84b48b0f827dd23af53e69e4bc9606ff90c759acb681cf28c8ba3eade68cb4"} Nov 25 14:27:09 crc kubenswrapper[4796]: I1125 14:27:09.204601 4796 generic.go:334] "Generic (PLEG): container finished" podID="ad795e35-d372-4564-adbf-04646d54d05d" containerID="0a56b0974aa11ce90917ecdc8102442042f683e696ef9f897b360ebed9a78e38" exitCode=0 Nov 25 14:27:09 crc kubenswrapper[4796]: I1125 14:27:09.204657 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ad795e35-d372-4564-adbf-04646d54d05d","Type":"ContainerDied","Data":"0a56b0974aa11ce90917ecdc8102442042f683e696ef9f897b360ebed9a78e38"} Nov 25 14:27:09 crc kubenswrapper[4796]: I1125 14:27:09.207036 4796 generic.go:334] "Generic (PLEG): container finished" podID="40cd1b96-cac2-46f0-9b1d-4934d6f13087" containerID="5c21340a9282e03f3575e59e52118f318627c68ba479107b590cec22bd4a2999" exitCode=0 Nov 25 14:27:09 crc kubenswrapper[4796]: I1125 14:27:09.207075 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9f5sn" event={"ID":"40cd1b96-cac2-46f0-9b1d-4934d6f13087","Type":"ContainerDied","Data":"5c21340a9282e03f3575e59e52118f318627c68ba479107b590cec22bd4a2999"} Nov 25 14:27:09 crc kubenswrapper[4796]: I1125 14:27:09.576513 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:27:09 crc kubenswrapper[4796]: I1125 14:27:09.581008 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-vzn94" Nov 25 14:27:10 crc kubenswrapper[4796]: I1125 14:27:10.175435 4796 patch_prober.go:28] interesting pod/router-default-5444994796-c6rl5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:27:10 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Nov 25 14:27:10 crc kubenswrapper[4796]: [+]process-running ok Nov 25 14:27:10 crc kubenswrapper[4796]: healthz check failed Nov 25 14:27:10 crc kubenswrapper[4796]: I1125 14:27:10.175488 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c6rl5" podUID="0162f2df-c29a-4c00-b445-67a9bae4c5ad" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:27:10 crc kubenswrapper[4796]: I1125 14:27:10.598485 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-spphm" Nov 25 14:27:11 crc kubenswrapper[4796]: I1125 14:27:11.174740 4796 patch_prober.go:28] interesting pod/router-default-5444994796-c6rl5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:27:11 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Nov 25 14:27:11 crc kubenswrapper[4796]: [+]process-running ok Nov 25 14:27:11 crc kubenswrapper[4796]: healthz check failed Nov 25 14:27:11 crc kubenswrapper[4796]: I1125 14:27:11.174803 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c6rl5" podUID="0162f2df-c29a-4c00-b445-67a9bae4c5ad" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:27:12 crc kubenswrapper[4796]: I1125 14:27:12.176514 4796 patch_prober.go:28] interesting pod/router-default-5444994796-c6rl5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:27:12 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Nov 25 14:27:12 crc kubenswrapper[4796]: [+]process-running ok Nov 25 14:27:12 crc kubenswrapper[4796]: healthz check failed Nov 25 14:27:12 crc kubenswrapper[4796]: I1125 14:27:12.177013 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c6rl5" podUID="0162f2df-c29a-4c00-b445-67a9bae4c5ad" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:27:13 crc kubenswrapper[4796]: I1125 14:27:13.175165 4796 patch_prober.go:28] interesting pod/router-default-5444994796-c6rl5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:27:13 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Nov 25 14:27:13 crc kubenswrapper[4796]: [+]process-running ok Nov 25 14:27:13 crc kubenswrapper[4796]: healthz check failed Nov 25 14:27:13 crc kubenswrapper[4796]: I1125 14:27:13.175234 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c6rl5" podUID="0162f2df-c29a-4c00-b445-67a9bae4c5ad" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:27:14 crc kubenswrapper[4796]: I1125 14:27:14.181502 4796 patch_prober.go:28] interesting pod/router-default-5444994796-c6rl5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:27:14 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Nov 25 14:27:14 crc kubenswrapper[4796]: [+]process-running ok Nov 25 14:27:14 crc kubenswrapper[4796]: healthz check failed Nov 25 14:27:14 crc kubenswrapper[4796]: I1125 14:27:14.181824 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c6rl5" podUID="0162f2df-c29a-4c00-b445-67a9bae4c5ad" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:27:14 crc kubenswrapper[4796]: I1125 14:27:14.537345 4796 patch_prober.go:28] interesting pod/downloads-7954f5f757-tsl5t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Nov 25 14:27:14 crc kubenswrapper[4796]: I1125 14:27:14.537414 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tsl5t" podUID="c239761f-ade6-47eb-8fa5-f5178577ccb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Nov 25 14:27:14 crc kubenswrapper[4796]: I1125 14:27:14.538265 4796 patch_prober.go:28] interesting pod/downloads-7954f5f757-tsl5t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Nov 25 14:27:14 crc kubenswrapper[4796]: I1125 14:27:14.538331 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tsl5t" podUID="c239761f-ade6-47eb-8fa5-f5178577ccb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Nov 25 14:27:14 
crc kubenswrapper[4796]: I1125 14:27:14.808282 4796 patch_prober.go:28] interesting pod/console-f9d7485db-x57qm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Nov 25 14:27:14 crc kubenswrapper[4796]: I1125 14:27:14.808376 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-x57qm" podUID="fa025925-c61e-49ae-ba50-79f4a401a20f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Nov 25 14:27:15 crc kubenswrapper[4796]: I1125 14:27:15.175062 4796 patch_prober.go:28] interesting pod/router-default-5444994796-c6rl5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:27:15 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Nov 25 14:27:15 crc kubenswrapper[4796]: [+]process-running ok Nov 25 14:27:15 crc kubenswrapper[4796]: healthz check failed Nov 25 14:27:15 crc kubenswrapper[4796]: I1125 14:27:15.175197 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c6rl5" podUID="0162f2df-c29a-4c00-b445-67a9bae4c5ad" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:27:16 crc kubenswrapper[4796]: I1125 14:27:16.177195 4796 patch_prober.go:28] interesting pod/router-default-5444994796-c6rl5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:27:16 crc kubenswrapper[4796]: [+]has-synced ok Nov 25 14:27:16 crc kubenswrapper[4796]: [+]process-running ok Nov 25 14:27:16 crc kubenswrapper[4796]: healthz check failed 
Nov 25 14:27:16 crc kubenswrapper[4796]: I1125 14:27:16.177795 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-c6rl5" podUID="0162f2df-c29a-4c00-b445-67a9bae4c5ad" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:27:17 crc kubenswrapper[4796]: I1125 14:27:17.135257 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs\") pod \"network-metrics-daemon-n4f9r\" (UID: \"a07d588f-1940-4a4b-a4a9-94451e43ec8d\") " pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:27:17 crc kubenswrapper[4796]: I1125 14:27:17.144626 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a07d588f-1940-4a4b-a4a9-94451e43ec8d-metrics-certs\") pod \"network-metrics-daemon-n4f9r\" (UID: \"a07d588f-1940-4a4b-a4a9-94451e43ec8d\") " pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:27:17 crc kubenswrapper[4796]: I1125 14:27:17.177083 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-c6rl5" Nov 25 14:27:17 crc kubenswrapper[4796]: I1125 14:27:17.181539 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-c6rl5" Nov 25 14:27:17 crc kubenswrapper[4796]: I1125 14:27:17.225158 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n4f9r" Nov 25 14:27:19 crc kubenswrapper[4796]: I1125 14:27:19.514630 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 14:27:19 crc kubenswrapper[4796]: I1125 14:27:19.514695 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 14:27:23 crc kubenswrapper[4796]: I1125 14:27:23.577793 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 14:27:23 crc kubenswrapper[4796]: I1125 14:27:23.626784 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad795e35-d372-4564-adbf-04646d54d05d-kube-api-access\") pod \"ad795e35-d372-4564-adbf-04646d54d05d\" (UID: \"ad795e35-d372-4564-adbf-04646d54d05d\") " Nov 25 14:27:23 crc kubenswrapper[4796]: I1125 14:27:23.626886 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad795e35-d372-4564-adbf-04646d54d05d-kubelet-dir\") pod \"ad795e35-d372-4564-adbf-04646d54d05d\" (UID: \"ad795e35-d372-4564-adbf-04646d54d05d\") " Nov 25 14:27:23 crc kubenswrapper[4796]: I1125 14:27:23.627370 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad795e35-d372-4564-adbf-04646d54d05d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod 
"ad795e35-d372-4564-adbf-04646d54d05d" (UID: "ad795e35-d372-4564-adbf-04646d54d05d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:27:23 crc kubenswrapper[4796]: I1125 14:27:23.639706 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad795e35-d372-4564-adbf-04646d54d05d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ad795e35-d372-4564-adbf-04646d54d05d" (UID: "ad795e35-d372-4564-adbf-04646d54d05d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:27:23 crc kubenswrapper[4796]: I1125 14:27:23.728206 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad795e35-d372-4564-adbf-04646d54d05d-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 14:27:23 crc kubenswrapper[4796]: I1125 14:27:23.728240 4796 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad795e35-d372-4564-adbf-04646d54d05d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 14:27:24 crc kubenswrapper[4796]: I1125 14:27:24.302236 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ad795e35-d372-4564-adbf-04646d54d05d","Type":"ContainerDied","Data":"1dc383783777a786737cf49f3e134474ad32194464b54b99737f4efddade1dc3"} Nov 25 14:27:24 crc kubenswrapper[4796]: I1125 14:27:24.302519 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dc383783777a786737cf49f3e134474ad32194464b54b99737f4efddade1dc3" Nov 25 14:27:24 crc kubenswrapper[4796]: I1125 14:27:24.302281 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 14:27:24 crc kubenswrapper[4796]: I1125 14:27:24.557525 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-tsl5t" Nov 25 14:27:24 crc kubenswrapper[4796]: I1125 14:27:24.815197 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-x57qm" Nov 25 14:27:24 crc kubenswrapper[4796]: I1125 14:27:24.820462 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-x57qm" Nov 25 14:27:25 crc kubenswrapper[4796]: I1125 14:27:25.059746 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:27:35 crc kubenswrapper[4796]: I1125 14:27:35.566718 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6sp9g" Nov 25 14:27:36 crc kubenswrapper[4796]: E1125 14:27:36.408186 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 25 14:27:36 crc kubenswrapper[4796]: E1125 14:27:36.408639 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2ns4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-q7hvt_openshift-marketplace(bb9121e4-c300-4964-9021-5fe2ea80802c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 14:27:36 crc kubenswrapper[4796]: E1125 14:27:36.409936 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-q7hvt" podUID="bb9121e4-c300-4964-9021-5fe2ea80802c" Nov 25 14:27:42 crc 
kubenswrapper[4796]: I1125 14:27:42.454988 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:27:44 crc kubenswrapper[4796]: E1125 14:27:44.907221 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 25 14:27:44 crc kubenswrapper[4796]: E1125 14:27:44.907745 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6tvvv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Terminatio
nMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-4wnxb_openshift-marketplace(c7b5a9d6-081c-4217-8498-19ab1decb386): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 14:27:44 crc kubenswrapper[4796]: E1125 14:27:44.909052 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-4wnxb" podUID="c7b5a9d6-081c-4217-8498-19ab1decb386" Nov 25 14:27:45 crc kubenswrapper[4796]: I1125 14:27:45.813513 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 14:27:45 crc kubenswrapper[4796]: E1125 14:27:45.813814 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad795e35-d372-4564-adbf-04646d54d05d" containerName="pruner" Nov 25 14:27:45 crc kubenswrapper[4796]: I1125 14:27:45.813832 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad795e35-d372-4564-adbf-04646d54d05d" containerName="pruner" Nov 25 14:27:45 crc kubenswrapper[4796]: E1125 14:27:45.813847 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aa1c4ac-9347-4234-8c46-6a522b18b859" containerName="pruner" Nov 25 14:27:45 crc kubenswrapper[4796]: I1125 14:27:45.813855 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aa1c4ac-9347-4234-8c46-6a522b18b859" containerName="pruner" Nov 25 14:27:45 crc kubenswrapper[4796]: I1125 14:27:45.814048 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad795e35-d372-4564-adbf-04646d54d05d" containerName="pruner" Nov 25 14:27:45 crc kubenswrapper[4796]: I1125 14:27:45.814069 4796 
memory_manager.go:354] "RemoveStaleState removing state" podUID="8aa1c4ac-9347-4234-8c46-6a522b18b859" containerName="pruner" Nov 25 14:27:45 crc kubenswrapper[4796]: I1125 14:27:45.814549 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 14:27:45 crc kubenswrapper[4796]: I1125 14:27:45.818400 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 14:27:45 crc kubenswrapper[4796]: I1125 14:27:45.819036 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 14:27:45 crc kubenswrapper[4796]: I1125 14:27:45.833327 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 14:27:45 crc kubenswrapper[4796]: I1125 14:27:45.861936 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa86ddd0-4c4c-414a-ae60-48436ff982f1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fa86ddd0-4c4c-414a-ae60-48436ff982f1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 14:27:45 crc kubenswrapper[4796]: I1125 14:27:45.862477 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa86ddd0-4c4c-414a-ae60-48436ff982f1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fa86ddd0-4c4c-414a-ae60-48436ff982f1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 14:27:45 crc kubenswrapper[4796]: I1125 14:27:45.964307 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa86ddd0-4c4c-414a-ae60-48436ff982f1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fa86ddd0-4c4c-414a-ae60-48436ff982f1\") 
" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 14:27:45 crc kubenswrapper[4796]: I1125 14:27:45.964479 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa86ddd0-4c4c-414a-ae60-48436ff982f1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fa86ddd0-4c4c-414a-ae60-48436ff982f1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 14:27:45 crc kubenswrapper[4796]: I1125 14:27:45.964768 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa86ddd0-4c4c-414a-ae60-48436ff982f1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fa86ddd0-4c4c-414a-ae60-48436ff982f1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 14:27:45 crc kubenswrapper[4796]: I1125 14:27:45.984339 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa86ddd0-4c4c-414a-ae60-48436ff982f1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fa86ddd0-4c4c-414a-ae60-48436ff982f1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 14:27:46 crc kubenswrapper[4796]: I1125 14:27:46.187545 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 14:27:48 crc kubenswrapper[4796]: E1125 14:27:48.328842 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-4wnxb" podUID="c7b5a9d6-081c-4217-8498-19ab1decb386" Nov 25 14:27:49 crc kubenswrapper[4796]: I1125 14:27:49.514504 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 14:27:49 crc kubenswrapper[4796]: I1125 14:27:49.514962 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 14:27:49 crc kubenswrapper[4796]: E1125 14:27:49.583763 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 25 14:27:49 crc kubenswrapper[4796]: E1125 14:27:49.583984 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h25xt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-bbltb_openshift-marketplace(a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 14:27:49 crc kubenswrapper[4796]: E1125 14:27:49.585218 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-bbltb" podUID="a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c" Nov 25 14:27:49 crc 
kubenswrapper[4796]: E1125 14:27:49.908540 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 25 14:27:49 crc kubenswrapper[4796]: E1125 14:27:49.908739 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tbnfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-pxlgd_openshift-marketplace(c1c15ec0-52c3-4420-9ccf-a50630662516): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 14:27:49 crc kubenswrapper[4796]: E1125 14:27:49.911021 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-pxlgd" podUID="c1c15ec0-52c3-4420-9ccf-a50630662516" Nov 25 14:27:50 crc kubenswrapper[4796]: I1125 14:27:50.211504 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 14:27:50 crc kubenswrapper[4796]: I1125 14:27:50.212450 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 14:27:50 crc kubenswrapper[4796]: I1125 14:27:50.221143 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 14:27:50 crc kubenswrapper[4796]: I1125 14:27:50.366004 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf70b233-4a08-40ee-9ae3-42c7f242ba60-kubelet-dir\") pod \"installer-9-crc\" (UID: \"cf70b233-4a08-40ee-9ae3-42c7f242ba60\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 14:27:50 crc kubenswrapper[4796]: I1125 14:27:50.366158 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cf70b233-4a08-40ee-9ae3-42c7f242ba60-var-lock\") pod \"installer-9-crc\" (UID: \"cf70b233-4a08-40ee-9ae3-42c7f242ba60\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 14:27:50 crc kubenswrapper[4796]: I1125 14:27:50.366196 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf70b233-4a08-40ee-9ae3-42c7f242ba60-kube-api-access\") pod \"installer-9-crc\" (UID: \"cf70b233-4a08-40ee-9ae3-42c7f242ba60\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 14:27:50 crc kubenswrapper[4796]: I1125 14:27:50.467731 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cf70b233-4a08-40ee-9ae3-42c7f242ba60-var-lock\") pod \"installer-9-crc\" (UID: \"cf70b233-4a08-40ee-9ae3-42c7f242ba60\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 14:27:50 crc kubenswrapper[4796]: I1125 14:27:50.467782 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cf70b233-4a08-40ee-9ae3-42c7f242ba60-var-lock\") pod \"installer-9-crc\" (UID: \"cf70b233-4a08-40ee-9ae3-42c7f242ba60\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 14:27:50 crc kubenswrapper[4796]: I1125 14:27:50.467840 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf70b233-4a08-40ee-9ae3-42c7f242ba60-kube-api-access\") pod \"installer-9-crc\" (UID: \"cf70b233-4a08-40ee-9ae3-42c7f242ba60\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 14:27:50 crc kubenswrapper[4796]: I1125 14:27:50.467934 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf70b233-4a08-40ee-9ae3-42c7f242ba60-kubelet-dir\") pod \"installer-9-crc\" (UID: \"cf70b233-4a08-40ee-9ae3-42c7f242ba60\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 14:27:50 crc kubenswrapper[4796]: I1125 14:27:50.468122 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/cf70b233-4a08-40ee-9ae3-42c7f242ba60-kubelet-dir\") pod \"installer-9-crc\" (UID: \"cf70b233-4a08-40ee-9ae3-42c7f242ba60\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 14:27:50 crc kubenswrapper[4796]: I1125 14:27:50.513282 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf70b233-4a08-40ee-9ae3-42c7f242ba60-kube-api-access\") pod \"installer-9-crc\" (UID: \"cf70b233-4a08-40ee-9ae3-42c7f242ba60\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 14:27:50 crc kubenswrapper[4796]: I1125 14:27:50.540248 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 14:27:53 crc kubenswrapper[4796]: E1125 14:27:53.290557 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bbltb" podUID="a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c" Nov 25 14:27:53 crc kubenswrapper[4796]: E1125 14:27:53.291559 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-pxlgd" podUID="c1c15ec0-52c3-4420-9ccf-a50630662516" Nov 25 14:27:53 crc kubenswrapper[4796]: E1125 14:27:53.327588 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 25 14:27:53 crc kubenswrapper[4796]: E1125 14:27:53.327883 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rhgq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-9f5sn_openshift-marketplace(40cd1b96-cac2-46f0-9b1d-4934d6f13087): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 14:27:53 crc kubenswrapper[4796]: E1125 14:27:53.329195 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-9f5sn" podUID="40cd1b96-cac2-46f0-9b1d-4934d6f13087" Nov 25 14:27:53 crc kubenswrapper[4796]: E1125 14:27:53.349990 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 25 14:27:53 crc kubenswrapper[4796]: E1125 14:27:53.350181 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8kjmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Termin
ationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-qqcls_openshift-marketplace(d44b94b1-15d2-48d6-8ae3-bc9787adc1e3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 14:27:53 crc kubenswrapper[4796]: E1125 14:27:53.351973 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-qqcls" podUID="d44b94b1-15d2-48d6-8ae3-bc9787adc1e3" Nov 25 14:27:53 crc kubenswrapper[4796]: E1125 14:27:53.495529 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-qqcls" podUID="d44b94b1-15d2-48d6-8ae3-bc9787adc1e3" Nov 25 14:27:53 crc kubenswrapper[4796]: E1125 14:27:53.497818 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9f5sn" podUID="40cd1b96-cac2-46f0-9b1d-4934d6f13087" Nov 25 14:27:53 crc kubenswrapper[4796]: I1125 14:27:53.618610 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 14:27:53 crc kubenswrapper[4796]: I1125 14:27:53.647002 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 14:27:53 crc kubenswrapper[4796]: I1125 14:27:53.716427 4796 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-n4f9r"] Nov 25 14:27:53 crc kubenswrapper[4796]: W1125 14:27:53.723183 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda07d588f_1940_4a4b_a4a9_94451e43ec8d.slice/crio-ee1549b1412c2840cbc58c65f3c805164c368a1df7bd3fcd7330c722324b3209 WatchSource:0}: Error finding container ee1549b1412c2840cbc58c65f3c805164c368a1df7bd3fcd7330c722324b3209: Status 404 returned error can't find the container with id ee1549b1412c2840cbc58c65f3c805164c368a1df7bd3fcd7330c722324b3209 Nov 25 14:27:53 crc kubenswrapper[4796]: E1125 14:27:53.833496 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 25 14:27:53 crc kubenswrapper[4796]: E1125 14:27:53.833699 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p52ds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-r2vrr_openshift-marketplace(11354fb2-68d2-4e9c-9072-98e9866eb162): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 14:27:53 crc kubenswrapper[4796]: E1125 14:27:53.836154 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-r2vrr" podUID="11354fb2-68d2-4e9c-9072-98e9866eb162" Nov 25 14:27:54 crc 
kubenswrapper[4796]: I1125 14:27:54.502059 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"fa86ddd0-4c4c-414a-ae60-48436ff982f1","Type":"ContainerStarted","Data":"8fec8ebe8ca4f6c4c95169608d1fc962909ad0a25508730c0a28302a37615146"} Nov 25 14:27:54 crc kubenswrapper[4796]: I1125 14:27:54.502430 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"fa86ddd0-4c4c-414a-ae60-48436ff982f1","Type":"ContainerStarted","Data":"d91f8eccbae7a0cf05bf80598afb2039c5bdb32df007c67d2162b9648f8beddb"} Nov 25 14:27:54 crc kubenswrapper[4796]: I1125 14:27:54.509204 4796 generic.go:334] "Generic (PLEG): container finished" podID="8c3dfd30-55e6-44cf-9657-cff0cc0d2499" containerID="d7fbe4972e47545a2c4e05c47dfc9d80eade5479ca79432acf10373a2f96fde4" exitCode=0 Nov 25 14:27:54 crc kubenswrapper[4796]: I1125 14:27:54.509362 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dsq6m" event={"ID":"8c3dfd30-55e6-44cf-9657-cff0cc0d2499","Type":"ContainerDied","Data":"d7fbe4972e47545a2c4e05c47dfc9d80eade5479ca79432acf10373a2f96fde4"} Nov 25 14:27:54 crc kubenswrapper[4796]: I1125 14:27:54.514402 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n4f9r" event={"ID":"a07d588f-1940-4a4b-a4a9-94451e43ec8d","Type":"ContainerStarted","Data":"4af6dd8657a9fe9b0fccb74c77fbd0df31082b6e2bc41e1483d308bb5a5b10be"} Nov 25 14:27:54 crc kubenswrapper[4796]: I1125 14:27:54.514452 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n4f9r" event={"ID":"a07d588f-1940-4a4b-a4a9-94451e43ec8d","Type":"ContainerStarted","Data":"ee1549b1412c2840cbc58c65f3c805164c368a1df7bd3fcd7330c722324b3209"} Nov 25 14:27:54 crc kubenswrapper[4796]: I1125 14:27:54.518278 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"cf70b233-4a08-40ee-9ae3-42c7f242ba60","Type":"ContainerStarted","Data":"f3431969dd7d13fe68d1767c14fb43905b420964797cb6183cdc2de3fabe2e6f"} Nov 25 14:27:54 crc kubenswrapper[4796]: I1125 14:27:54.518327 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"cf70b233-4a08-40ee-9ae3-42c7f242ba60","Type":"ContainerStarted","Data":"9ec86047903553383531ce81c0fb1d18a554581dd27c5792384b01e659cd63e1"} Nov 25 14:27:54 crc kubenswrapper[4796]: E1125 14:27:54.520425 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-r2vrr" podUID="11354fb2-68d2-4e9c-9072-98e9866eb162" Nov 25 14:27:54 crc kubenswrapper[4796]: I1125 14:27:54.532509 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=9.532461773 podStartE2EDuration="9.532461773s" podCreationTimestamp="2025-11-25 14:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:27:54.528355689 +0000 UTC m=+202.871465123" watchObservedRunningTime="2025-11-25 14:27:54.532461773 +0000 UTC m=+202.875571207" Nov 25 14:27:54 crc kubenswrapper[4796]: I1125 14:27:54.607370 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=4.607330109 podStartE2EDuration="4.607330109s" podCreationTimestamp="2025-11-25 14:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:27:54.580507811 +0000 UTC m=+202.923617255" 
watchObservedRunningTime="2025-11-25 14:27:54.607330109 +0000 UTC m=+202.950439533" Nov 25 14:27:55 crc kubenswrapper[4796]: I1125 14:27:55.528952 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n4f9r" event={"ID":"a07d588f-1940-4a4b-a4a9-94451e43ec8d","Type":"ContainerStarted","Data":"d1bfb6c84e9bb7acc38fa6d2c72fc147e7bed0324ebf4fe9671e0db7029ce86b"} Nov 25 14:27:56 crc kubenswrapper[4796]: I1125 14:27:56.537773 4796 generic.go:334] "Generic (PLEG): container finished" podID="fa86ddd0-4c4c-414a-ae60-48436ff982f1" containerID="8fec8ebe8ca4f6c4c95169608d1fc962909ad0a25508730c0a28302a37615146" exitCode=0 Nov 25 14:27:56 crc kubenswrapper[4796]: I1125 14:27:56.537888 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"fa86ddd0-4c4c-414a-ae60-48436ff982f1","Type":"ContainerDied","Data":"8fec8ebe8ca4f6c4c95169608d1fc962909ad0a25508730c0a28302a37615146"} Nov 25 14:27:56 crc kubenswrapper[4796]: I1125 14:27:56.560926 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-n4f9r" podStartSLOduration=182.560864825 podStartE2EDuration="3m2.560864825s" podCreationTimestamp="2025-11-25 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:27:56.558440662 +0000 UTC m=+204.901550126" watchObservedRunningTime="2025-11-25 14:27:56.560864825 +0000 UTC m=+204.903974289" Nov 25 14:27:57 crc kubenswrapper[4796]: I1125 14:27:57.549127 4796 generic.go:334] "Generic (PLEG): container finished" podID="bb9121e4-c300-4964-9021-5fe2ea80802c" containerID="aec1c7c32ce563a76ece160174fcb3de93ad313389df6f59736afaffefb4978f" exitCode=0 Nov 25 14:27:57 crc kubenswrapper[4796]: I1125 14:27:57.549213 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7hvt" 
event={"ID":"bb9121e4-c300-4964-9021-5fe2ea80802c","Type":"ContainerDied","Data":"aec1c7c32ce563a76ece160174fcb3de93ad313389df6f59736afaffefb4978f"} Nov 25 14:27:57 crc kubenswrapper[4796]: I1125 14:27:57.558300 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dsq6m" event={"ID":"8c3dfd30-55e6-44cf-9657-cff0cc0d2499","Type":"ContainerStarted","Data":"1f2d546837c9e30b979c6a1b1a21c3b3da2bc776076282d57158926ac205e1a0"} Nov 25 14:27:57 crc kubenswrapper[4796]: I1125 14:27:57.626053 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dsq6m" podStartSLOduration=2.593137731 podStartE2EDuration="54.626032838s" podCreationTimestamp="2025-11-25 14:27:03 +0000 UTC" firstStartedPulling="2025-11-25 14:27:05.095800352 +0000 UTC m=+153.438909776" lastFinishedPulling="2025-11-25 14:27:57.128695449 +0000 UTC m=+205.471804883" observedRunningTime="2025-11-25 14:27:57.624973836 +0000 UTC m=+205.968083300" watchObservedRunningTime="2025-11-25 14:27:57.626032838 +0000 UTC m=+205.969142272" Nov 25 14:27:57 crc kubenswrapper[4796]: I1125 14:27:57.870684 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 14:27:57 crc kubenswrapper[4796]: I1125 14:27:57.979618 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa86ddd0-4c4c-414a-ae60-48436ff982f1-kube-api-access\") pod \"fa86ddd0-4c4c-414a-ae60-48436ff982f1\" (UID: \"fa86ddd0-4c4c-414a-ae60-48436ff982f1\") " Nov 25 14:27:57 crc kubenswrapper[4796]: I1125 14:27:57.979708 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa86ddd0-4c4c-414a-ae60-48436ff982f1-kubelet-dir\") pod \"fa86ddd0-4c4c-414a-ae60-48436ff982f1\" (UID: \"fa86ddd0-4c4c-414a-ae60-48436ff982f1\") " Nov 25 14:27:57 crc kubenswrapper[4796]: I1125 14:27:57.979760 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa86ddd0-4c4c-414a-ae60-48436ff982f1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fa86ddd0-4c4c-414a-ae60-48436ff982f1" (UID: "fa86ddd0-4c4c-414a-ae60-48436ff982f1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:27:57 crc kubenswrapper[4796]: I1125 14:27:57.979925 4796 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fa86ddd0-4c4c-414a-ae60-48436ff982f1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 14:27:57 crc kubenswrapper[4796]: I1125 14:27:57.985125 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa86ddd0-4c4c-414a-ae60-48436ff982f1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fa86ddd0-4c4c-414a-ae60-48436ff982f1" (UID: "fa86ddd0-4c4c-414a-ae60-48436ff982f1"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:27:58 crc kubenswrapper[4796]: I1125 14:27:58.081012 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fa86ddd0-4c4c-414a-ae60-48436ff982f1-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 14:27:58 crc kubenswrapper[4796]: I1125 14:27:58.563915 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"fa86ddd0-4c4c-414a-ae60-48436ff982f1","Type":"ContainerDied","Data":"d91f8eccbae7a0cf05bf80598afb2039c5bdb32df007c67d2162b9648f8beddb"} Nov 25 14:27:58 crc kubenswrapper[4796]: I1125 14:27:58.564184 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d91f8eccbae7a0cf05bf80598afb2039c5bdb32df007c67d2162b9648f8beddb" Nov 25 14:27:58 crc kubenswrapper[4796]: I1125 14:27:58.563973 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 14:27:58 crc kubenswrapper[4796]: I1125 14:27:58.567859 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7hvt" event={"ID":"bb9121e4-c300-4964-9021-5fe2ea80802c","Type":"ContainerStarted","Data":"f4555b90ea7233ef6c9eee8ce22d18a1c707d57df7b7796c0e54ffbec9f96b59"} Nov 25 14:27:58 crc kubenswrapper[4796]: I1125 14:27:58.595785 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q7hvt" podStartSLOduration=2.594368598 podStartE2EDuration="55.595767904s" podCreationTimestamp="2025-11-25 14:27:03 +0000 UTC" firstStartedPulling="2025-11-25 14:27:05.127707924 +0000 UTC m=+153.470817348" lastFinishedPulling="2025-11-25 14:27:58.12910723 +0000 UTC m=+206.472216654" observedRunningTime="2025-11-25 14:27:58.590455234 +0000 UTC m=+206.933564668" watchObservedRunningTime="2025-11-25 14:27:58.595767904 +0000 UTC 
m=+206.938877338" Nov 25 14:28:03 crc kubenswrapper[4796]: I1125 14:28:03.776525 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dsq6m" Nov 25 14:28:03 crc kubenswrapper[4796]: I1125 14:28:03.777196 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dsq6m" Nov 25 14:28:04 crc kubenswrapper[4796]: I1125 14:28:04.029404 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dsq6m" Nov 25 14:28:04 crc kubenswrapper[4796]: I1125 14:28:04.170146 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q7hvt" Nov 25 14:28:04 crc kubenswrapper[4796]: I1125 14:28:04.170311 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q7hvt" Nov 25 14:28:04 crc kubenswrapper[4796]: I1125 14:28:04.227454 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q7hvt" Nov 25 14:28:04 crc kubenswrapper[4796]: I1125 14:28:04.606432 4796 generic.go:334] "Generic (PLEG): container finished" podID="c7b5a9d6-081c-4217-8498-19ab1decb386" containerID="fce9354397ae7fc9a48e66b2708391e71764cc3753b8ae917d6b41f57e5ffae9" exitCode=0 Nov 25 14:28:04 crc kubenswrapper[4796]: I1125 14:28:04.606629 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wnxb" event={"ID":"c7b5a9d6-081c-4217-8498-19ab1decb386","Type":"ContainerDied","Data":"fce9354397ae7fc9a48e66b2708391e71764cc3753b8ae917d6b41f57e5ffae9"} Nov 25 14:28:04 crc kubenswrapper[4796]: I1125 14:28:04.663783 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q7hvt" Nov 25 14:28:04 crc kubenswrapper[4796]: I1125 14:28:04.685963 4796 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dsq6m" Nov 25 14:28:05 crc kubenswrapper[4796]: I1125 14:28:05.618326 4796 generic.go:334] "Generic (PLEG): container finished" podID="c1c15ec0-52c3-4420-9ccf-a50630662516" containerID="7a9e3f58f447f41dc929ad5ee20af3d5253ff5e1584f8a7f754203f56346a7bd" exitCode=0 Nov 25 14:28:05 crc kubenswrapper[4796]: I1125 14:28:05.618410 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxlgd" event={"ID":"c1c15ec0-52c3-4420-9ccf-a50630662516","Type":"ContainerDied","Data":"7a9e3f58f447f41dc929ad5ee20af3d5253ff5e1584f8a7f754203f56346a7bd"} Nov 25 14:28:05 crc kubenswrapper[4796]: I1125 14:28:05.621602 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wnxb" event={"ID":"c7b5a9d6-081c-4217-8498-19ab1decb386","Type":"ContainerStarted","Data":"050c1e94a11e07be5f61aca3334704459c87aa6e731c49804e172b602a43a9aa"} Nov 25 14:28:05 crc kubenswrapper[4796]: I1125 14:28:05.657553 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4wnxb" podStartSLOduration=2.770068112 podStartE2EDuration="1m1.657536265s" podCreationTimestamp="2025-11-25 14:27:04 +0000 UTC" firstStartedPulling="2025-11-25 14:27:06.145317693 +0000 UTC m=+154.488427117" lastFinishedPulling="2025-11-25 14:28:05.032785846 +0000 UTC m=+213.375895270" observedRunningTime="2025-11-25 14:28:05.654644988 +0000 UTC m=+213.997754452" watchObservedRunningTime="2025-11-25 14:28:05.657536265 +0000 UTC m=+214.000645679" Nov 25 14:28:05 crc kubenswrapper[4796]: I1125 14:28:05.666270 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q7hvt"] Nov 25 14:28:06 crc kubenswrapper[4796]: I1125 14:28:06.629039 4796 generic.go:334] "Generic (PLEG): container finished" podID="a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c" 
containerID="6dbc5efc925c59b157e06b43b0635c5e24d64438af49c3af7696a59055668e3f" exitCode=0 Nov 25 14:28:06 crc kubenswrapper[4796]: I1125 14:28:06.629140 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbltb" event={"ID":"a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c","Type":"ContainerDied","Data":"6dbc5efc925c59b157e06b43b0635c5e24d64438af49c3af7696a59055668e3f"} Nov 25 14:28:06 crc kubenswrapper[4796]: I1125 14:28:06.631785 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqcls" event={"ID":"d44b94b1-15d2-48d6-8ae3-bc9787adc1e3","Type":"ContainerStarted","Data":"b781f02ec71c87636f32db72b5dcf2806d72f60e78aa1bdfed74e5ccc3b6f380"} Nov 25 14:28:06 crc kubenswrapper[4796]: I1125 14:28:06.634373 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxlgd" event={"ID":"c1c15ec0-52c3-4420-9ccf-a50630662516","Type":"ContainerStarted","Data":"6e0ebdafd79a8f6be15e026860dad60b6d3a34adc37d933e5b4ef5db044c6b86"} Nov 25 14:28:06 crc kubenswrapper[4796]: I1125 14:28:06.665926 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pxlgd" podStartSLOduration=2.822214258 podStartE2EDuration="1m1.665904346s" podCreationTimestamp="2025-11-25 14:27:05 +0000 UTC" firstStartedPulling="2025-11-25 14:27:07.166333611 +0000 UTC m=+155.509443035" lastFinishedPulling="2025-11-25 14:28:06.010023679 +0000 UTC m=+214.353133123" observedRunningTime="2025-11-25 14:28:06.664660378 +0000 UTC m=+215.007769812" watchObservedRunningTime="2025-11-25 14:28:06.665904346 +0000 UTC m=+215.009013770" Nov 25 14:28:07 crc kubenswrapper[4796]: I1125 14:28:07.641839 4796 generic.go:334] "Generic (PLEG): container finished" podID="d44b94b1-15d2-48d6-8ae3-bc9787adc1e3" containerID="b781f02ec71c87636f32db72b5dcf2806d72f60e78aa1bdfed74e5ccc3b6f380" exitCode=0 Nov 25 14:28:07 crc kubenswrapper[4796]: I1125 
14:28:07.641938 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqcls" event={"ID":"d44b94b1-15d2-48d6-8ae3-bc9787adc1e3","Type":"ContainerDied","Data":"b781f02ec71c87636f32db72b5dcf2806d72f60e78aa1bdfed74e5ccc3b6f380"} Nov 25 14:28:07 crc kubenswrapper[4796]: I1125 14:28:07.644145 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbltb" event={"ID":"a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c","Type":"ContainerStarted","Data":"a61be15cab07c22060a5f797a643a2e9c05aca81fa52b9296d15d9e4a8eda6f0"} Nov 25 14:28:07 crc kubenswrapper[4796]: I1125 14:28:07.644304 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q7hvt" podUID="bb9121e4-c300-4964-9021-5fe2ea80802c" containerName="registry-server" containerID="cri-o://f4555b90ea7233ef6c9eee8ce22d18a1c707d57df7b7796c0e54ffbec9f96b59" gracePeriod=2 Nov 25 14:28:07 crc kubenswrapper[4796]: I1125 14:28:07.685052 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bbltb" podStartSLOduration=2.7522849430000003 podStartE2EDuration="1m4.685032191s" podCreationTimestamp="2025-11-25 14:27:03 +0000 UTC" firstStartedPulling="2025-11-25 14:27:05.114962894 +0000 UTC m=+153.458072318" lastFinishedPulling="2025-11-25 14:28:07.047710112 +0000 UTC m=+215.390819566" observedRunningTime="2025-11-25 14:28:07.681854945 +0000 UTC m=+216.024964379" watchObservedRunningTime="2025-11-25 14:28:07.685032191 +0000 UTC m=+216.028141615" Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.147153 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q7hvt" Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.151942 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb9121e4-c300-4964-9021-5fe2ea80802c-catalog-content\") pod \"bb9121e4-c300-4964-9021-5fe2ea80802c\" (UID: \"bb9121e4-c300-4964-9021-5fe2ea80802c\") " Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.152039 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ns4q\" (UniqueName: \"kubernetes.io/projected/bb9121e4-c300-4964-9021-5fe2ea80802c-kube-api-access-2ns4q\") pod \"bb9121e4-c300-4964-9021-5fe2ea80802c\" (UID: \"bb9121e4-c300-4964-9021-5fe2ea80802c\") " Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.152075 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb9121e4-c300-4964-9021-5fe2ea80802c-utilities\") pod \"bb9121e4-c300-4964-9021-5fe2ea80802c\" (UID: \"bb9121e4-c300-4964-9021-5fe2ea80802c\") " Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.153222 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb9121e4-c300-4964-9021-5fe2ea80802c-utilities" (OuterVolumeSpecName: "utilities") pod "bb9121e4-c300-4964-9021-5fe2ea80802c" (UID: "bb9121e4-c300-4964-9021-5fe2ea80802c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.159126 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb9121e4-c300-4964-9021-5fe2ea80802c-kube-api-access-2ns4q" (OuterVolumeSpecName: "kube-api-access-2ns4q") pod "bb9121e4-c300-4964-9021-5fe2ea80802c" (UID: "bb9121e4-c300-4964-9021-5fe2ea80802c"). InnerVolumeSpecName "kube-api-access-2ns4q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.212520 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb9121e4-c300-4964-9021-5fe2ea80802c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb9121e4-c300-4964-9021-5fe2ea80802c" (UID: "bb9121e4-c300-4964-9021-5fe2ea80802c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.252891 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ns4q\" (UniqueName: \"kubernetes.io/projected/bb9121e4-c300-4964-9021-5fe2ea80802c-kube-api-access-2ns4q\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.252924 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb9121e4-c300-4964-9021-5fe2ea80802c-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.252935 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb9121e4-c300-4964-9021-5fe2ea80802c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.650272 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9f5sn" event={"ID":"40cd1b96-cac2-46f0-9b1d-4934d6f13087","Type":"ContainerStarted","Data":"79aa553a41413f34b185a6b0e0d83960e4c7f8683e83748d654e1a5eab940088"} Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.652175 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqcls" event={"ID":"d44b94b1-15d2-48d6-8ae3-bc9787adc1e3","Type":"ContainerStarted","Data":"43d5137380587abb12ca1462f4f60671bdea2b9d9c915a45e239b89b1671db45"} Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 
14:28:08.654821 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7hvt" Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.654849 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7hvt" event={"ID":"bb9121e4-c300-4964-9021-5fe2ea80802c","Type":"ContainerDied","Data":"f4555b90ea7233ef6c9eee8ce22d18a1c707d57df7b7796c0e54ffbec9f96b59"} Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.654805 4796 generic.go:334] "Generic (PLEG): container finished" podID="bb9121e4-c300-4964-9021-5fe2ea80802c" containerID="f4555b90ea7233ef6c9eee8ce22d18a1c707d57df7b7796c0e54ffbec9f96b59" exitCode=0 Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.654931 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7hvt" event={"ID":"bb9121e4-c300-4964-9021-5fe2ea80802c","Type":"ContainerDied","Data":"6cae5388fb3d9058f724ee076b94602a3f4bb707418082f9f0bfb77f376887d7"} Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.654959 4796 scope.go:117] "RemoveContainer" containerID="f4555b90ea7233ef6c9eee8ce22d18a1c707d57df7b7796c0e54ffbec9f96b59" Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.675567 4796 scope.go:117] "RemoveContainer" containerID="aec1c7c32ce563a76ece160174fcb3de93ad313389df6f59736afaffefb4978f" Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.683557 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q7hvt"] Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.687321 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q7hvt"] Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.696100 4796 scope.go:117] "RemoveContainer" containerID="7ae723b6b0dc1f92766cab82c6269901f8eda21b97e6963e3b6a28d4bdd2c9af" Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.703564 
4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qqcls" podStartSLOduration=10.210338505 podStartE2EDuration="1m2.703549507s" podCreationTimestamp="2025-11-25 14:27:06 +0000 UTC" firstStartedPulling="2025-11-25 14:27:15.752532827 +0000 UTC m=+164.095642261" lastFinishedPulling="2025-11-25 14:28:08.245743819 +0000 UTC m=+216.588853263" observedRunningTime="2025-11-25 14:28:08.700340711 +0000 UTC m=+217.043450135" watchObservedRunningTime="2025-11-25 14:28:08.703549507 +0000 UTC m=+217.046658931" Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.724025 4796 scope.go:117] "RemoveContainer" containerID="f4555b90ea7233ef6c9eee8ce22d18a1c707d57df7b7796c0e54ffbec9f96b59" Nov 25 14:28:08 crc kubenswrapper[4796]: E1125 14:28:08.724662 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4555b90ea7233ef6c9eee8ce22d18a1c707d57df7b7796c0e54ffbec9f96b59\": container with ID starting with f4555b90ea7233ef6c9eee8ce22d18a1c707d57df7b7796c0e54ffbec9f96b59 not found: ID does not exist" containerID="f4555b90ea7233ef6c9eee8ce22d18a1c707d57df7b7796c0e54ffbec9f96b59" Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.724721 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4555b90ea7233ef6c9eee8ce22d18a1c707d57df7b7796c0e54ffbec9f96b59"} err="failed to get container status \"f4555b90ea7233ef6c9eee8ce22d18a1c707d57df7b7796c0e54ffbec9f96b59\": rpc error: code = NotFound desc = could not find container \"f4555b90ea7233ef6c9eee8ce22d18a1c707d57df7b7796c0e54ffbec9f96b59\": container with ID starting with f4555b90ea7233ef6c9eee8ce22d18a1c707d57df7b7796c0e54ffbec9f96b59 not found: ID does not exist" Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.724796 4796 scope.go:117] "RemoveContainer" containerID="aec1c7c32ce563a76ece160174fcb3de93ad313389df6f59736afaffefb4978f" Nov 25 14:28:08 
crc kubenswrapper[4796]: E1125 14:28:08.725228 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aec1c7c32ce563a76ece160174fcb3de93ad313389df6f59736afaffefb4978f\": container with ID starting with aec1c7c32ce563a76ece160174fcb3de93ad313389df6f59736afaffefb4978f not found: ID does not exist" containerID="aec1c7c32ce563a76ece160174fcb3de93ad313389df6f59736afaffefb4978f" Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.725249 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aec1c7c32ce563a76ece160174fcb3de93ad313389df6f59736afaffefb4978f"} err="failed to get container status \"aec1c7c32ce563a76ece160174fcb3de93ad313389df6f59736afaffefb4978f\": rpc error: code = NotFound desc = could not find container \"aec1c7c32ce563a76ece160174fcb3de93ad313389df6f59736afaffefb4978f\": container with ID starting with aec1c7c32ce563a76ece160174fcb3de93ad313389df6f59736afaffefb4978f not found: ID does not exist" Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.725262 4796 scope.go:117] "RemoveContainer" containerID="7ae723b6b0dc1f92766cab82c6269901f8eda21b97e6963e3b6a28d4bdd2c9af" Nov 25 14:28:08 crc kubenswrapper[4796]: E1125 14:28:08.725700 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ae723b6b0dc1f92766cab82c6269901f8eda21b97e6963e3b6a28d4bdd2c9af\": container with ID starting with 7ae723b6b0dc1f92766cab82c6269901f8eda21b97e6963e3b6a28d4bdd2c9af not found: ID does not exist" containerID="7ae723b6b0dc1f92766cab82c6269901f8eda21b97e6963e3b6a28d4bdd2c9af" Nov 25 14:28:08 crc kubenswrapper[4796]: I1125 14:28:08.725723 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ae723b6b0dc1f92766cab82c6269901f8eda21b97e6963e3b6a28d4bdd2c9af"} err="failed to get container status 
\"7ae723b6b0dc1f92766cab82c6269901f8eda21b97e6963e3b6a28d4bdd2c9af\": rpc error: code = NotFound desc = could not find container \"7ae723b6b0dc1f92766cab82c6269901f8eda21b97e6963e3b6a28d4bdd2c9af\": container with ID starting with 7ae723b6b0dc1f92766cab82c6269901f8eda21b97e6963e3b6a28d4bdd2c9af not found: ID does not exist" Nov 25 14:28:09 crc kubenswrapper[4796]: I1125 14:28:09.663369 4796 generic.go:334] "Generic (PLEG): container finished" podID="40cd1b96-cac2-46f0-9b1d-4934d6f13087" containerID="79aa553a41413f34b185a6b0e0d83960e4c7f8683e83748d654e1a5eab940088" exitCode=0 Nov 25 14:28:09 crc kubenswrapper[4796]: I1125 14:28:09.663406 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9f5sn" event={"ID":"40cd1b96-cac2-46f0-9b1d-4934d6f13087","Type":"ContainerDied","Data":"79aa553a41413f34b185a6b0e0d83960e4c7f8683e83748d654e1a5eab940088"} Nov 25 14:28:10 crc kubenswrapper[4796]: I1125 14:28:10.416635 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb9121e4-c300-4964-9021-5fe2ea80802c" path="/var/lib/kubelet/pods/bb9121e4-c300-4964-9021-5fe2ea80802c/volumes" Nov 25 14:28:10 crc kubenswrapper[4796]: I1125 14:28:10.669949 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9f5sn" event={"ID":"40cd1b96-cac2-46f0-9b1d-4934d6f13087","Type":"ContainerStarted","Data":"fae9722c56477b3a43acb171173b6f7acbbc2e56d94dfa91932625b9ec8dabf2"} Nov 25 14:28:10 crc kubenswrapper[4796]: I1125 14:28:10.672100 4796 generic.go:334] "Generic (PLEG): container finished" podID="11354fb2-68d2-4e9c-9072-98e9866eb162" containerID="e4e582d17828be9b85a8416fdd69ee8d04301f3202cd7587d414e5a263b7b104" exitCode=0 Nov 25 14:28:10 crc kubenswrapper[4796]: I1125 14:28:10.672140 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2vrr" 
event={"ID":"11354fb2-68d2-4e9c-9072-98e9866eb162","Type":"ContainerDied","Data":"e4e582d17828be9b85a8416fdd69ee8d04301f3202cd7587d414e5a263b7b104"} Nov 25 14:28:10 crc kubenswrapper[4796]: I1125 14:28:10.687276 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9f5sn" podStartSLOduration=11.702460782 podStartE2EDuration="1m3.687256463s" podCreationTimestamp="2025-11-25 14:27:07 +0000 UTC" firstStartedPulling="2025-11-25 14:27:18.087472147 +0000 UTC m=+166.430581571" lastFinishedPulling="2025-11-25 14:28:10.072267828 +0000 UTC m=+218.415377252" observedRunningTime="2025-11-25 14:28:10.686114219 +0000 UTC m=+219.029223643" watchObservedRunningTime="2025-11-25 14:28:10.687256463 +0000 UTC m=+219.030365887" Nov 25 14:28:11 crc kubenswrapper[4796]: I1125 14:28:11.678814 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2vrr" event={"ID":"11354fb2-68d2-4e9c-9072-98e9866eb162","Type":"ContainerStarted","Data":"5cbf3a671ab1ae18d9c7b2d4703181343b233adc9cc760f4b535004c5f06ee41"} Nov 25 14:28:11 crc kubenswrapper[4796]: I1125 14:28:11.700749 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r2vrr" podStartSLOduration=2.783067724 podStartE2EDuration="1m6.700728597s" podCreationTimestamp="2025-11-25 14:27:05 +0000 UTC" firstStartedPulling="2025-11-25 14:27:07.179114162 +0000 UTC m=+155.522223586" lastFinishedPulling="2025-11-25 14:28:11.096775015 +0000 UTC m=+219.439884459" observedRunningTime="2025-11-25 14:28:11.698220412 +0000 UTC m=+220.041329846" watchObservedRunningTime="2025-11-25 14:28:11.700728597 +0000 UTC m=+220.043838031" Nov 25 14:28:14 crc kubenswrapper[4796]: I1125 14:28:14.019579 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bbltb" Nov 25 14:28:14 crc kubenswrapper[4796]: I1125 14:28:14.020634 4796 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bbltb" Nov 25 14:28:14 crc kubenswrapper[4796]: I1125 14:28:14.059788 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bbltb" Nov 25 14:28:14 crc kubenswrapper[4796]: I1125 14:28:14.365811 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4wnxb" Nov 25 14:28:14 crc kubenswrapper[4796]: I1125 14:28:14.365893 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4wnxb" Nov 25 14:28:14 crc kubenswrapper[4796]: I1125 14:28:14.407216 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4wnxb" Nov 25 14:28:14 crc kubenswrapper[4796]: I1125 14:28:14.731521 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4wnxb" Nov 25 14:28:14 crc kubenswrapper[4796]: I1125 14:28:14.741977 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bbltb" Nov 25 14:28:15 crc kubenswrapper[4796]: I1125 14:28:15.743051 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pxlgd" Nov 25 14:28:15 crc kubenswrapper[4796]: I1125 14:28:15.743359 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pxlgd" Nov 25 14:28:15 crc kubenswrapper[4796]: I1125 14:28:15.786096 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pxlgd" Nov 25 14:28:16 crc kubenswrapper[4796]: I1125 14:28:16.064766 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-4wnxb"] Nov 25 14:28:16 crc kubenswrapper[4796]: I1125 14:28:16.194481 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r2vrr" Nov 25 14:28:16 crc kubenswrapper[4796]: I1125 14:28:16.194535 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r2vrr" Nov 25 14:28:16 crc kubenswrapper[4796]: I1125 14:28:16.229543 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r2vrr" Nov 25 14:28:16 crc kubenswrapper[4796]: I1125 14:28:16.706668 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4wnxb" podUID="c7b5a9d6-081c-4217-8498-19ab1decb386" containerName="registry-server" containerID="cri-o://050c1e94a11e07be5f61aca3334704459c87aa6e731c49804e172b602a43a9aa" gracePeriod=2 Nov 25 14:28:16 crc kubenswrapper[4796]: I1125 14:28:16.754400 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pxlgd" Nov 25 14:28:16 crc kubenswrapper[4796]: I1125 14:28:16.759060 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r2vrr" Nov 25 14:28:16 crc kubenswrapper[4796]: I1125 14:28:16.995501 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qqcls" Nov 25 14:28:16 crc kubenswrapper[4796]: I1125 14:28:16.995803 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qqcls" Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.053607 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qqcls" Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.165854 
4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4wnxb" Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.299063 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7b5a9d6-081c-4217-8498-19ab1decb386-utilities\") pod \"c7b5a9d6-081c-4217-8498-19ab1decb386\" (UID: \"c7b5a9d6-081c-4217-8498-19ab1decb386\") " Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.299112 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tvvv\" (UniqueName: \"kubernetes.io/projected/c7b5a9d6-081c-4217-8498-19ab1decb386-kube-api-access-6tvvv\") pod \"c7b5a9d6-081c-4217-8498-19ab1decb386\" (UID: \"c7b5a9d6-081c-4217-8498-19ab1decb386\") " Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.299305 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7b5a9d6-081c-4217-8498-19ab1decb386-catalog-content\") pod \"c7b5a9d6-081c-4217-8498-19ab1decb386\" (UID: \"c7b5a9d6-081c-4217-8498-19ab1decb386\") " Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.305784 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7b5a9d6-081c-4217-8498-19ab1decb386-kube-api-access-6tvvv" (OuterVolumeSpecName: "kube-api-access-6tvvv") pod "c7b5a9d6-081c-4217-8498-19ab1decb386" (UID: "c7b5a9d6-081c-4217-8498-19ab1decb386"). InnerVolumeSpecName "kube-api-access-6tvvv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.322159 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7b5a9d6-081c-4217-8498-19ab1decb386-utilities" (OuterVolumeSpecName: "utilities") pod "c7b5a9d6-081c-4217-8498-19ab1decb386" (UID: "c7b5a9d6-081c-4217-8498-19ab1decb386"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.345651 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7b5a9d6-081c-4217-8498-19ab1decb386-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7b5a9d6-081c-4217-8498-19ab1decb386" (UID: "c7b5a9d6-081c-4217-8498-19ab1decb386"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.394309 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9f5sn" Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.394361 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9f5sn" Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.401241 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7b5a9d6-081c-4217-8498-19ab1decb386-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.401279 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7b5a9d6-081c-4217-8498-19ab1decb386-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.401294 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tvvv\" (UniqueName: 
\"kubernetes.io/projected/c7b5a9d6-081c-4217-8498-19ab1decb386-kube-api-access-6tvvv\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.442424 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9f5sn" Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.715111 4796 generic.go:334] "Generic (PLEG): container finished" podID="c7b5a9d6-081c-4217-8498-19ab1decb386" containerID="050c1e94a11e07be5f61aca3334704459c87aa6e731c49804e172b602a43a9aa" exitCode=0 Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.715164 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4wnxb" Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.715235 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wnxb" event={"ID":"c7b5a9d6-081c-4217-8498-19ab1decb386","Type":"ContainerDied","Data":"050c1e94a11e07be5f61aca3334704459c87aa6e731c49804e172b602a43a9aa"} Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.715263 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wnxb" event={"ID":"c7b5a9d6-081c-4217-8498-19ab1decb386","Type":"ContainerDied","Data":"4ee1d5e6f6200caef9a8e76edd3d7fdbb90d14b87a793bd16ac14a266ebad2fc"} Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.715279 4796 scope.go:117] "RemoveContainer" containerID="050c1e94a11e07be5f61aca3334704459c87aa6e731c49804e172b602a43a9aa" Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.733029 4796 scope.go:117] "RemoveContainer" containerID="fce9354397ae7fc9a48e66b2708391e71764cc3753b8ae917d6b41f57e5ffae9" Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.745214 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4wnxb"] Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.748743 
4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4wnxb"] Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.759453 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9f5sn" Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.759862 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qqcls" Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.761192 4796 scope.go:117] "RemoveContainer" containerID="2fcd7e1c7b5fe2e34ae5e7ff85b7828fd8dd906552b3f83e39339b11be58eaf0" Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.780841 4796 scope.go:117] "RemoveContainer" containerID="050c1e94a11e07be5f61aca3334704459c87aa6e731c49804e172b602a43a9aa" Nov 25 14:28:17 crc kubenswrapper[4796]: E1125 14:28:17.781606 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"050c1e94a11e07be5f61aca3334704459c87aa6e731c49804e172b602a43a9aa\": container with ID starting with 050c1e94a11e07be5f61aca3334704459c87aa6e731c49804e172b602a43a9aa not found: ID does not exist" containerID="050c1e94a11e07be5f61aca3334704459c87aa6e731c49804e172b602a43a9aa" Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.781676 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"050c1e94a11e07be5f61aca3334704459c87aa6e731c49804e172b602a43a9aa"} err="failed to get container status \"050c1e94a11e07be5f61aca3334704459c87aa6e731c49804e172b602a43a9aa\": rpc error: code = NotFound desc = could not find container \"050c1e94a11e07be5f61aca3334704459c87aa6e731c49804e172b602a43a9aa\": container with ID starting with 050c1e94a11e07be5f61aca3334704459c87aa6e731c49804e172b602a43a9aa not found: ID does not exist" Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.781734 4796 scope.go:117] 
"RemoveContainer" containerID="fce9354397ae7fc9a48e66b2708391e71764cc3753b8ae917d6b41f57e5ffae9" Nov 25 14:28:17 crc kubenswrapper[4796]: E1125 14:28:17.782172 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fce9354397ae7fc9a48e66b2708391e71764cc3753b8ae917d6b41f57e5ffae9\": container with ID starting with fce9354397ae7fc9a48e66b2708391e71764cc3753b8ae917d6b41f57e5ffae9 not found: ID does not exist" containerID="fce9354397ae7fc9a48e66b2708391e71764cc3753b8ae917d6b41f57e5ffae9" Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.782204 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fce9354397ae7fc9a48e66b2708391e71764cc3753b8ae917d6b41f57e5ffae9"} err="failed to get container status \"fce9354397ae7fc9a48e66b2708391e71764cc3753b8ae917d6b41f57e5ffae9\": rpc error: code = NotFound desc = could not find container \"fce9354397ae7fc9a48e66b2708391e71764cc3753b8ae917d6b41f57e5ffae9\": container with ID starting with fce9354397ae7fc9a48e66b2708391e71764cc3753b8ae917d6b41f57e5ffae9 not found: ID does not exist" Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.782227 4796 scope.go:117] "RemoveContainer" containerID="2fcd7e1c7b5fe2e34ae5e7ff85b7828fd8dd906552b3f83e39339b11be58eaf0" Nov 25 14:28:17 crc kubenswrapper[4796]: E1125 14:28:17.782547 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fcd7e1c7b5fe2e34ae5e7ff85b7828fd8dd906552b3f83e39339b11be58eaf0\": container with ID starting with 2fcd7e1c7b5fe2e34ae5e7ff85b7828fd8dd906552b3f83e39339b11be58eaf0 not found: ID does not exist" containerID="2fcd7e1c7b5fe2e34ae5e7ff85b7828fd8dd906552b3f83e39339b11be58eaf0" Nov 25 14:28:17 crc kubenswrapper[4796]: I1125 14:28:17.782599 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2fcd7e1c7b5fe2e34ae5e7ff85b7828fd8dd906552b3f83e39339b11be58eaf0"} err="failed to get container status \"2fcd7e1c7b5fe2e34ae5e7ff85b7828fd8dd906552b3f83e39339b11be58eaf0\": rpc error: code = NotFound desc = could not find container \"2fcd7e1c7b5fe2e34ae5e7ff85b7828fd8dd906552b3f83e39339b11be58eaf0\": container with ID starting with 2fcd7e1c7b5fe2e34ae5e7ff85b7828fd8dd906552b3f83e39339b11be58eaf0 not found: ID does not exist" Nov 25 14:28:18 crc kubenswrapper[4796]: I1125 14:28:18.415177 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7b5a9d6-081c-4217-8498-19ab1decb386" path="/var/lib/kubelet/pods/c7b5a9d6-081c-4217-8498-19ab1decb386/volumes" Nov 25 14:28:18 crc kubenswrapper[4796]: I1125 14:28:18.663770 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2vrr"] Nov 25 14:28:18 crc kubenswrapper[4796]: I1125 14:28:18.720649 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r2vrr" podUID="11354fb2-68d2-4e9c-9072-98e9866eb162" containerName="registry-server" containerID="cri-o://5cbf3a671ab1ae18d9c7b2d4703181343b233adc9cc760f4b535004c5f06ee41" gracePeriod=2 Nov 25 14:28:19 crc kubenswrapper[4796]: I1125 14:28:19.514270 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 14:28:19 crc kubenswrapper[4796]: I1125 14:28:19.515374 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 25 14:28:19 crc kubenswrapper[4796]: I1125 14:28:19.515630 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 14:28:19 crc kubenswrapper[4796]: I1125 14:28:19.516638 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"52ae8d61d0942e0624997f6214aa104a793f603be378ede8e4896846b2f06db4"} pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 14:28:19 crc kubenswrapper[4796]: I1125 14:28:19.516928 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" containerID="cri-o://52ae8d61d0942e0624997f6214aa104a793f603be378ede8e4896846b2f06db4" gracePeriod=600 Nov 25 14:28:21 crc kubenswrapper[4796]: I1125 14:28:21.066650 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9f5sn"] Nov 25 14:28:21 crc kubenswrapper[4796]: I1125 14:28:21.067250 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9f5sn" podUID="40cd1b96-cac2-46f0-9b1d-4934d6f13087" containerName="registry-server" containerID="cri-o://fae9722c56477b3a43acb171173b6f7acbbc2e56d94dfa91932625b9ec8dabf2" gracePeriod=2 Nov 25 14:28:21 crc kubenswrapper[4796]: I1125 14:28:21.739207 4796 generic.go:334] "Generic (PLEG): container finished" podID="11354fb2-68d2-4e9c-9072-98e9866eb162" containerID="5cbf3a671ab1ae18d9c7b2d4703181343b233adc9cc760f4b535004c5f06ee41" exitCode=0 Nov 25 14:28:21 crc kubenswrapper[4796]: I1125 14:28:21.739310 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-r2vrr" event={"ID":"11354fb2-68d2-4e9c-9072-98e9866eb162","Type":"ContainerDied","Data":"5cbf3a671ab1ae18d9c7b2d4703181343b233adc9cc760f4b535004c5f06ee41"} Nov 25 14:28:21 crc kubenswrapper[4796]: I1125 14:28:21.741963 4796 generic.go:334] "Generic (PLEG): container finished" podID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerID="52ae8d61d0942e0624997f6214aa104a793f603be378ede8e4896846b2f06db4" exitCode=0 Nov 25 14:28:21 crc kubenswrapper[4796]: I1125 14:28:21.742014 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerDied","Data":"52ae8d61d0942e0624997f6214aa104a793f603be378ede8e4896846b2f06db4"} Nov 25 14:28:22 crc kubenswrapper[4796]: I1125 14:28:22.685992 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2vrr" Nov 25 14:28:22 crc kubenswrapper[4796]: I1125 14:28:22.747641 4796 generic.go:334] "Generic (PLEG): container finished" podID="40cd1b96-cac2-46f0-9b1d-4934d6f13087" containerID="fae9722c56477b3a43acb171173b6f7acbbc2e56d94dfa91932625b9ec8dabf2" exitCode=0 Nov 25 14:28:22 crc kubenswrapper[4796]: I1125 14:28:22.747693 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9f5sn" event={"ID":"40cd1b96-cac2-46f0-9b1d-4934d6f13087","Type":"ContainerDied","Data":"fae9722c56477b3a43acb171173b6f7acbbc2e56d94dfa91932625b9ec8dabf2"} Nov 25 14:28:22 crc kubenswrapper[4796]: I1125 14:28:22.749444 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2vrr" event={"ID":"11354fb2-68d2-4e9c-9072-98e9866eb162","Type":"ContainerDied","Data":"4c53b6f1024589aadc8c6ada33fbe9cee79ad965d4680147892ca213299dabbd"} Nov 25 14:28:22 crc kubenswrapper[4796]: I1125 14:28:22.749469 4796 scope.go:117] "RemoveContainer" 
containerID="5cbf3a671ab1ae18d9c7b2d4703181343b233adc9cc760f4b535004c5f06ee41" Nov 25 14:28:22 crc kubenswrapper[4796]: I1125 14:28:22.749564 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2vrr" Nov 25 14:28:22 crc kubenswrapper[4796]: I1125 14:28:22.770348 4796 scope.go:117] "RemoveContainer" containerID="e4e582d17828be9b85a8416fdd69ee8d04301f3202cd7587d414e5a263b7b104" Nov 25 14:28:22 crc kubenswrapper[4796]: I1125 14:28:22.781256 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p52ds\" (UniqueName: \"kubernetes.io/projected/11354fb2-68d2-4e9c-9072-98e9866eb162-kube-api-access-p52ds\") pod \"11354fb2-68d2-4e9c-9072-98e9866eb162\" (UID: \"11354fb2-68d2-4e9c-9072-98e9866eb162\") " Nov 25 14:28:22 crc kubenswrapper[4796]: I1125 14:28:22.781300 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11354fb2-68d2-4e9c-9072-98e9866eb162-utilities\") pod \"11354fb2-68d2-4e9c-9072-98e9866eb162\" (UID: \"11354fb2-68d2-4e9c-9072-98e9866eb162\") " Nov 25 14:28:22 crc kubenswrapper[4796]: I1125 14:28:22.781332 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11354fb2-68d2-4e9c-9072-98e9866eb162-catalog-content\") pod \"11354fb2-68d2-4e9c-9072-98e9866eb162\" (UID: \"11354fb2-68d2-4e9c-9072-98e9866eb162\") " Nov 25 14:28:22 crc kubenswrapper[4796]: I1125 14:28:22.782358 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11354fb2-68d2-4e9c-9072-98e9866eb162-utilities" (OuterVolumeSpecName: "utilities") pod "11354fb2-68d2-4e9c-9072-98e9866eb162" (UID: "11354fb2-68d2-4e9c-9072-98e9866eb162"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:28:22 crc kubenswrapper[4796]: I1125 14:28:22.789856 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11354fb2-68d2-4e9c-9072-98e9866eb162-kube-api-access-p52ds" (OuterVolumeSpecName: "kube-api-access-p52ds") pod "11354fb2-68d2-4e9c-9072-98e9866eb162" (UID: "11354fb2-68d2-4e9c-9072-98e9866eb162"). InnerVolumeSpecName "kube-api-access-p52ds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:28:22 crc kubenswrapper[4796]: I1125 14:28:22.801317 4796 scope.go:117] "RemoveContainer" containerID="d697a652cef86b40e775e43969a5010165c5251e4ee07b3ce48572f5b90ecf5f" Nov 25 14:28:22 crc kubenswrapper[4796]: I1125 14:28:22.807283 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11354fb2-68d2-4e9c-9072-98e9866eb162-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "11354fb2-68d2-4e9c-9072-98e9866eb162" (UID: "11354fb2-68d2-4e9c-9072-98e9866eb162"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:28:22 crc kubenswrapper[4796]: I1125 14:28:22.882437 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p52ds\" (UniqueName: \"kubernetes.io/projected/11354fb2-68d2-4e9c-9072-98e9866eb162-kube-api-access-p52ds\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:22 crc kubenswrapper[4796]: I1125 14:28:22.882468 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11354fb2-68d2-4e9c-9072-98e9866eb162-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:22 crc kubenswrapper[4796]: I1125 14:28:22.882477 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11354fb2-68d2-4e9c-9072-98e9866eb162-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:23 crc kubenswrapper[4796]: I1125 14:28:23.075567 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2vrr"] Nov 25 14:28:23 crc kubenswrapper[4796]: I1125 14:28:23.079415 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2vrr"] Nov 25 14:28:23 crc kubenswrapper[4796]: I1125 14:28:23.466601 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9f5sn" Nov 25 14:28:23 crc kubenswrapper[4796]: I1125 14:28:23.590813 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40cd1b96-cac2-46f0-9b1d-4934d6f13087-catalog-content\") pod \"40cd1b96-cac2-46f0-9b1d-4934d6f13087\" (UID: \"40cd1b96-cac2-46f0-9b1d-4934d6f13087\") " Nov 25 14:28:23 crc kubenswrapper[4796]: I1125 14:28:23.591254 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhgq9\" (UniqueName: \"kubernetes.io/projected/40cd1b96-cac2-46f0-9b1d-4934d6f13087-kube-api-access-rhgq9\") pod \"40cd1b96-cac2-46f0-9b1d-4934d6f13087\" (UID: \"40cd1b96-cac2-46f0-9b1d-4934d6f13087\") " Nov 25 14:28:23 crc kubenswrapper[4796]: I1125 14:28:23.591356 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40cd1b96-cac2-46f0-9b1d-4934d6f13087-utilities\") pod \"40cd1b96-cac2-46f0-9b1d-4934d6f13087\" (UID: \"40cd1b96-cac2-46f0-9b1d-4934d6f13087\") " Nov 25 14:28:23 crc kubenswrapper[4796]: I1125 14:28:23.592998 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40cd1b96-cac2-46f0-9b1d-4934d6f13087-utilities" (OuterVolumeSpecName: "utilities") pod "40cd1b96-cac2-46f0-9b1d-4934d6f13087" (UID: "40cd1b96-cac2-46f0-9b1d-4934d6f13087"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:28:23 crc kubenswrapper[4796]: I1125 14:28:23.596841 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40cd1b96-cac2-46f0-9b1d-4934d6f13087-kube-api-access-rhgq9" (OuterVolumeSpecName: "kube-api-access-rhgq9") pod "40cd1b96-cac2-46f0-9b1d-4934d6f13087" (UID: "40cd1b96-cac2-46f0-9b1d-4934d6f13087"). InnerVolumeSpecName "kube-api-access-rhgq9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:28:23 crc kubenswrapper[4796]: I1125 14:28:23.693396 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40cd1b96-cac2-46f0-9b1d-4934d6f13087-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40cd1b96-cac2-46f0-9b1d-4934d6f13087" (UID: "40cd1b96-cac2-46f0-9b1d-4934d6f13087"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:28:23 crc kubenswrapper[4796]: I1125 14:28:23.693591 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40cd1b96-cac2-46f0-9b1d-4934d6f13087-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:23 crc kubenswrapper[4796]: I1125 14:28:23.693611 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhgq9\" (UniqueName: \"kubernetes.io/projected/40cd1b96-cac2-46f0-9b1d-4934d6f13087-kube-api-access-rhgq9\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:23 crc kubenswrapper[4796]: I1125 14:28:23.693624 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40cd1b96-cac2-46f0-9b1d-4934d6f13087-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:23 crc kubenswrapper[4796]: I1125 14:28:23.757471 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerStarted","Data":"6d416e75f99f56c789cfd2d95656cdac196835b00819700290456ea5dec1fb66"} Nov 25 14:28:23 crc kubenswrapper[4796]: I1125 14:28:23.762289 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9f5sn" Nov 25 14:28:23 crc kubenswrapper[4796]: I1125 14:28:23.762292 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9f5sn" event={"ID":"40cd1b96-cac2-46f0-9b1d-4934d6f13087","Type":"ContainerDied","Data":"a331d34a77374d250e253d7499c1d6328e54998d5af45a7574ebce1f1025e5e5"} Nov 25 14:28:23 crc kubenswrapper[4796]: I1125 14:28:23.762376 4796 scope.go:117] "RemoveContainer" containerID="fae9722c56477b3a43acb171173b6f7acbbc2e56d94dfa91932625b9ec8dabf2" Nov 25 14:28:23 crc kubenswrapper[4796]: I1125 14:28:23.791352 4796 scope.go:117] "RemoveContainer" containerID="79aa553a41413f34b185a6b0e0d83960e4c7f8683e83748d654e1a5eab940088" Nov 25 14:28:23 crc kubenswrapper[4796]: I1125 14:28:23.806303 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9f5sn"] Nov 25 14:28:23 crc kubenswrapper[4796]: I1125 14:28:23.806498 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9f5sn"] Nov 25 14:28:23 crc kubenswrapper[4796]: I1125 14:28:23.836612 4796 scope.go:117] "RemoveContainer" containerID="5c21340a9282e03f3575e59e52118f318627c68ba479107b590cec22bd4a2999" Nov 25 14:28:24 crc kubenswrapper[4796]: I1125 14:28:24.423231 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11354fb2-68d2-4e9c-9072-98e9866eb162" path="/var/lib/kubelet/pods/11354fb2-68d2-4e9c-9072-98e9866eb162/volumes" Nov 25 14:28:24 crc kubenswrapper[4796]: I1125 14:28:24.424877 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40cd1b96-cac2-46f0-9b1d-4934d6f13087" path="/var/lib/kubelet/pods/40cd1b96-cac2-46f0-9b1d-4934d6f13087/volumes" Nov 25 14:28:24 crc kubenswrapper[4796]: I1125 14:28:24.787237 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dr5s9"] Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 
14:28:32.076559 4796 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 14:28:32 crc kubenswrapper[4796]: E1125 14:28:32.077483 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b5a9d6-081c-4217-8498-19ab1decb386" containerName="extract-content" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.077504 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b5a9d6-081c-4217-8498-19ab1decb386" containerName="extract-content" Nov 25 14:28:32 crc kubenswrapper[4796]: E1125 14:28:32.077528 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11354fb2-68d2-4e9c-9072-98e9866eb162" containerName="registry-server" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.077541 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="11354fb2-68d2-4e9c-9072-98e9866eb162" containerName="registry-server" Nov 25 14:28:32 crc kubenswrapper[4796]: E1125 14:28:32.077563 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11354fb2-68d2-4e9c-9072-98e9866eb162" containerName="extract-content" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.077575 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="11354fb2-68d2-4e9c-9072-98e9866eb162" containerName="extract-content" Nov 25 14:28:32 crc kubenswrapper[4796]: E1125 14:28:32.077622 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b5a9d6-081c-4217-8498-19ab1decb386" containerName="extract-utilities" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.077635 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b5a9d6-081c-4217-8498-19ab1decb386" containerName="extract-utilities" Nov 25 14:28:32 crc kubenswrapper[4796]: E1125 14:28:32.077654 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9121e4-c300-4964-9021-5fe2ea80802c" containerName="extract-utilities" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.077665 
4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9121e4-c300-4964-9021-5fe2ea80802c" containerName="extract-utilities" Nov 25 14:28:32 crc kubenswrapper[4796]: E1125 14:28:32.077681 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa86ddd0-4c4c-414a-ae60-48436ff982f1" containerName="pruner" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.077693 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa86ddd0-4c4c-414a-ae60-48436ff982f1" containerName="pruner" Nov 25 14:28:32 crc kubenswrapper[4796]: E1125 14:28:32.077709 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40cd1b96-cac2-46f0-9b1d-4934d6f13087" containerName="extract-content" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.077721 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="40cd1b96-cac2-46f0-9b1d-4934d6f13087" containerName="extract-content" Nov 25 14:28:32 crc kubenswrapper[4796]: E1125 14:28:32.077736 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40cd1b96-cac2-46f0-9b1d-4934d6f13087" containerName="extract-utilities" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.077749 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="40cd1b96-cac2-46f0-9b1d-4934d6f13087" containerName="extract-utilities" Nov 25 14:28:32 crc kubenswrapper[4796]: E1125 14:28:32.077765 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11354fb2-68d2-4e9c-9072-98e9866eb162" containerName="extract-utilities" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.077777 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="11354fb2-68d2-4e9c-9072-98e9866eb162" containerName="extract-utilities" Nov 25 14:28:32 crc kubenswrapper[4796]: E1125 14:28:32.077796 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9121e4-c300-4964-9021-5fe2ea80802c" containerName="extract-content" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.077808 4796 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="bb9121e4-c300-4964-9021-5fe2ea80802c" containerName="extract-content" Nov 25 14:28:32 crc kubenswrapper[4796]: E1125 14:28:32.077823 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b5a9d6-081c-4217-8498-19ab1decb386" containerName="registry-server" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.077835 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b5a9d6-081c-4217-8498-19ab1decb386" containerName="registry-server" Nov 25 14:28:32 crc kubenswrapper[4796]: E1125 14:28:32.077857 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9121e4-c300-4964-9021-5fe2ea80802c" containerName="registry-server" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.077869 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9121e4-c300-4964-9021-5fe2ea80802c" containerName="registry-server" Nov 25 14:28:32 crc kubenswrapper[4796]: E1125 14:28:32.077886 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40cd1b96-cac2-46f0-9b1d-4934d6f13087" containerName="registry-server" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.077897 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="40cd1b96-cac2-46f0-9b1d-4934d6f13087" containerName="registry-server" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.078085 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b5a9d6-081c-4217-8498-19ab1decb386" containerName="registry-server" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.078108 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="11354fb2-68d2-4e9c-9072-98e9866eb162" containerName="registry-server" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.078128 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="40cd1b96-cac2-46f0-9b1d-4934d6f13087" containerName="registry-server" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.078143 4796 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="bb9121e4-c300-4964-9021-5fe2ea80802c" containerName="registry-server" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.078156 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa86ddd0-4c4c-414a-ae60-48436ff982f1" containerName="pruner" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.078681 4796 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.078864 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.079149 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885" gracePeriod=15 Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.079254 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d" gracePeriod=15 Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.079306 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f" gracePeriod=15 Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.079325 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f" gracePeriod=15 Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.079476 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50" gracePeriod=15 Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.080533 4796 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 14:28:32 crc kubenswrapper[4796]: E1125 14:28:32.080821 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.080842 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 14:28:32 crc kubenswrapper[4796]: E1125 14:28:32.080859 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.080871 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 25 14:28:32 crc kubenswrapper[4796]: E1125 14:28:32.080899 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.080910 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 14:28:32 crc kubenswrapper[4796]: E1125 14:28:32.080926 4796 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.080980 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 14:28:32 crc kubenswrapper[4796]: E1125 14:28:32.081000 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.081015 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 14:28:32 crc kubenswrapper[4796]: E1125 14:28:32.081030 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.081042 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 14:28:32 crc kubenswrapper[4796]: E1125 14:28:32.081058 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.081070 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.081258 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.081279 4796 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.081296 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.081321 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.081337 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.081353 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 14:28:32 crc kubenswrapper[4796]: E1125 14:28:32.081539 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.081553 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.081770 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.216925 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 
14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.217384 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.217430 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.217472 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.217510 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.217541 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.217562 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.217594 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.318783 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.318837 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.318877 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.318882 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.318932 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.318938 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.318971 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.318972 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.318987 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.319028 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.319007 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.319050 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.319072 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:28:32 crc 
kubenswrapper[4796]: I1125 14:28:32.319134 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.319034 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.319164 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.415219 4796 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.835707 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.838453 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 
14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.839831 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50" exitCode=0 Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.839875 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d" exitCode=0 Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.839890 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f" exitCode=0 Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.839903 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f" exitCode=2 Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.839971 4796 scope.go:117] "RemoveContainer" containerID="f9832155fbfbc4b78fb0b27a5ae91ad4faff461d8231cf76eac8b710f6b20fe1" Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.842207 4796 generic.go:334] "Generic (PLEG): container finished" podID="cf70b233-4a08-40ee-9ae3-42c7f242ba60" containerID="f3431969dd7d13fe68d1767c14fb43905b420964797cb6183cdc2de3fabe2e6f" exitCode=0 Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.842253 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"cf70b233-4a08-40ee-9ae3-42c7f242ba60","Type":"ContainerDied","Data":"f3431969dd7d13fe68d1767c14fb43905b420964797cb6183cdc2de3fabe2e6f"} Nov 25 14:28:32 crc kubenswrapper[4796]: I1125 14:28:32.843396 4796 status_manager.go:851] "Failed to get status for pod" podUID="cf70b233-4a08-40ee-9ae3-42c7f242ba60" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Nov 25 14:28:33 crc kubenswrapper[4796]: E1125 14:28:33.181995 4796 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" Nov 25 14:28:33 crc kubenswrapper[4796]: E1125 14:28:33.183028 4796 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" Nov 25 14:28:33 crc kubenswrapper[4796]: E1125 14:28:33.183491 4796 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" Nov 25 14:28:33 crc kubenswrapper[4796]: E1125 14:28:33.184050 4796 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" Nov 25 14:28:33 crc kubenswrapper[4796]: E1125 14:28:33.184526 4796 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" Nov 25 14:28:33 crc kubenswrapper[4796]: I1125 14:28:33.184576 4796 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 25 14:28:33 crc kubenswrapper[4796]: E1125 14:28:33.185065 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="200ms" Nov 25 14:28:33 crc kubenswrapper[4796]: E1125 14:28:33.386801 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="400ms" Nov 25 14:28:33 crc kubenswrapper[4796]: E1125 14:28:33.787952 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="800ms" Nov 25 14:28:33 crc kubenswrapper[4796]: I1125 14:28:33.855487 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.283769 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.286924 4796 status_manager.go:851] "Failed to get status for pod" podUID="cf70b233-4a08-40ee-9ae3-42c7f242ba60" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.347745 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf70b233-4a08-40ee-9ae3-42c7f242ba60-kube-api-access\") pod \"cf70b233-4a08-40ee-9ae3-42c7f242ba60\" (UID: \"cf70b233-4a08-40ee-9ae3-42c7f242ba60\") " Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.348082 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf70b233-4a08-40ee-9ae3-42c7f242ba60-kubelet-dir\") pod \"cf70b233-4a08-40ee-9ae3-42c7f242ba60\" (UID: \"cf70b233-4a08-40ee-9ae3-42c7f242ba60\") " Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.348169 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cf70b233-4a08-40ee-9ae3-42c7f242ba60-var-lock\") pod \"cf70b233-4a08-40ee-9ae3-42c7f242ba60\" (UID: \"cf70b233-4a08-40ee-9ae3-42c7f242ba60\") " Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.348504 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf70b233-4a08-40ee-9ae3-42c7f242ba60-var-lock" (OuterVolumeSpecName: "var-lock") pod "cf70b233-4a08-40ee-9ae3-42c7f242ba60" (UID: "cf70b233-4a08-40ee-9ae3-42c7f242ba60"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.348628 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf70b233-4a08-40ee-9ae3-42c7f242ba60-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cf70b233-4a08-40ee-9ae3-42c7f242ba60" (UID: "cf70b233-4a08-40ee-9ae3-42c7f242ba60"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.365853 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf70b233-4a08-40ee-9ae3-42c7f242ba60-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cf70b233-4a08-40ee-9ae3-42c7f242ba60" (UID: "cf70b233-4a08-40ee-9ae3-42c7f242ba60"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.449993 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf70b233-4a08-40ee-9ae3-42c7f242ba60-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.450054 4796 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf70b233-4a08-40ee-9ae3-42c7f242ba60-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.450068 4796 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cf70b233-4a08-40ee-9ae3-42c7f242ba60-var-lock\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.474739 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 14:28:34 crc 
kubenswrapper[4796]: I1125 14:28:34.475718 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.476858 4796 status_manager.go:851] "Failed to get status for pod" podUID="cf70b233-4a08-40ee-9ae3-42c7f242ba60" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.477505 4796 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.550797 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.551012 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.551109 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 
14:28:34.550920 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.551367 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.551511 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:28:34 crc kubenswrapper[4796]: E1125 14:28:34.589426 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="1.6s" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.652421 4796 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.652470 4796 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.652489 4796 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:34 crc kubenswrapper[4796]: E1125 14:28:34.836678 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:28:34Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:28:34Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:28:34Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:28:34Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" Nov 25 14:28:34 crc kubenswrapper[4796]: E1125 14:28:34.837313 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" Nov 25 14:28:34 crc kubenswrapper[4796]: E1125 14:28:34.837916 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" Nov 25 14:28:34 crc kubenswrapper[4796]: E1125 14:28:34.838466 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" Nov 25 
14:28:34 crc kubenswrapper[4796]: E1125 14:28:34.838970 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" Nov 25 14:28:34 crc kubenswrapper[4796]: E1125 14:28:34.839006 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.864314 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"cf70b233-4a08-40ee-9ae3-42c7f242ba60","Type":"ContainerDied","Data":"9ec86047903553383531ce81c0fb1d18a554581dd27c5792384b01e659cd63e1"} Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.864352 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.864376 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ec86047903553383531ce81c0fb1d18a554581dd27c5792384b01e659cd63e1" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.869850 4796 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.870299 4796 status_manager.go:851] "Failed to get status for pod" podUID="cf70b233-4a08-40ee-9ae3-42c7f242ba60" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Nov 25 14:28:34 crc kubenswrapper[4796]: 
I1125 14:28:34.870851 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.872277 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885" exitCode=0 Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.872361 4796 scope.go:117] "RemoveContainer" containerID="71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.872388 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.899928 4796 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.900297 4796 scope.go:117] "RemoveContainer" containerID="d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.900396 4796 status_manager.go:851] "Failed to get status for pod" podUID="cf70b233-4a08-40ee-9ae3-42c7f242ba60" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.922145 4796 scope.go:117] "RemoveContainer" containerID="8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 
14:28:34.945148 4796 scope.go:117] "RemoveContainer" containerID="bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.964197 4796 scope.go:117] "RemoveContainer" containerID="66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885" Nov 25 14:28:34 crc kubenswrapper[4796]: I1125 14:28:34.987666 4796 scope.go:117] "RemoveContainer" containerID="11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808" Nov 25 14:28:35 crc kubenswrapper[4796]: I1125 14:28:35.016728 4796 scope.go:117] "RemoveContainer" containerID="71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50" Nov 25 14:28:35 crc kubenswrapper[4796]: E1125 14:28:35.017372 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50\": container with ID starting with 71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50 not found: ID does not exist" containerID="71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50" Nov 25 14:28:35 crc kubenswrapper[4796]: I1125 14:28:35.017449 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50"} err="failed to get container status \"71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50\": rpc error: code = NotFound desc = could not find container \"71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50\": container with ID starting with 71f6ae1fdeb7ef03b7e634bbe728367cc7427b7995f929052f785f706f8afd50 not found: ID does not exist" Nov 25 14:28:35 crc kubenswrapper[4796]: I1125 14:28:35.017502 4796 scope.go:117] "RemoveContainer" containerID="d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d" Nov 25 14:28:35 crc kubenswrapper[4796]: E1125 14:28:35.018209 4796 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\": container with ID starting with d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d not found: ID does not exist" containerID="d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d" Nov 25 14:28:35 crc kubenswrapper[4796]: I1125 14:28:35.018250 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d"} err="failed to get container status \"d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\": rpc error: code = NotFound desc = could not find container \"d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d\": container with ID starting with d1139353d6cc2cccaa07f235a53a2f6c55784c37a07f39a2912e3f6d4c7cbc7d not found: ID does not exist" Nov 25 14:28:35 crc kubenswrapper[4796]: I1125 14:28:35.018277 4796 scope.go:117] "RemoveContainer" containerID="8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f" Nov 25 14:28:35 crc kubenswrapper[4796]: E1125 14:28:35.018818 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\": container with ID starting with 8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f not found: ID does not exist" containerID="8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f" Nov 25 14:28:35 crc kubenswrapper[4796]: I1125 14:28:35.018882 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f"} err="failed to get container status \"8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\": rpc error: code = NotFound desc = could 
not find container \"8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f\": container with ID starting with 8dd0c5583cb782308e314d53f971c3e1fd92fe3a588ac616fbcc503a98f2979f not found: ID does not exist" Nov 25 14:28:35 crc kubenswrapper[4796]: I1125 14:28:35.018935 4796 scope.go:117] "RemoveContainer" containerID="bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f" Nov 25 14:28:35 crc kubenswrapper[4796]: E1125 14:28:35.019416 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\": container with ID starting with bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f not found: ID does not exist" containerID="bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f" Nov 25 14:28:35 crc kubenswrapper[4796]: I1125 14:28:35.019479 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f"} err="failed to get container status \"bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\": rpc error: code = NotFound desc = could not find container \"bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f\": container with ID starting with bc597f78c9f3d31b025683ff7ff24e7e0c71d3e6afe930cf3ef2b1de11d0dd1f not found: ID does not exist" Nov 25 14:28:35 crc kubenswrapper[4796]: I1125 14:28:35.019518 4796 scope.go:117] "RemoveContainer" containerID="66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885" Nov 25 14:28:35 crc kubenswrapper[4796]: E1125 14:28:35.020741 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\": container with ID starting with 66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885 not found: 
ID does not exist" containerID="66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885" Nov 25 14:28:35 crc kubenswrapper[4796]: I1125 14:28:35.020947 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885"} err="failed to get container status \"66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\": rpc error: code = NotFound desc = could not find container \"66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885\": container with ID starting with 66d95ab92a7f6fa6ef1b0b8497a0e112371d8c443f41d9621ea905d85fdc7885 not found: ID does not exist" Nov 25 14:28:35 crc kubenswrapper[4796]: I1125 14:28:35.021185 4796 scope.go:117] "RemoveContainer" containerID="11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808" Nov 25 14:28:35 crc kubenswrapper[4796]: E1125 14:28:35.022152 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\": container with ID starting with 11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808 not found: ID does not exist" containerID="11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808" Nov 25 14:28:35 crc kubenswrapper[4796]: I1125 14:28:35.022361 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808"} err="failed to get container status \"11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\": rpc error: code = NotFound desc = could not find container \"11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808\": container with ID starting with 11c0d357032e8c51b357727b82edea753cfa5ed91c66f6f2ab9881f358a8a808 not found: ID does not exist" Nov 25 14:28:36 crc kubenswrapper[4796]: E1125 14:28:36.190318 4796 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="3.2s" Nov 25 14:28:36 crc kubenswrapper[4796]: I1125 14:28:36.421402 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Nov 25 14:28:37 crc kubenswrapper[4796]: E1125 14:28:37.130443 4796 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.227:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:28:37 crc kubenswrapper[4796]: I1125 14:28:37.131121 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:28:37 crc kubenswrapper[4796]: W1125 14:28:37.170157 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-beef206d71cee50697818b5040a03a6114f896604e5c2cc138110f180da5b1b6 WatchSource:0}: Error finding container beef206d71cee50697818b5040a03a6114f896604e5c2cc138110f180da5b1b6: Status 404 returned error can't find the container with id beef206d71cee50697818b5040a03a6114f896604e5c2cc138110f180da5b1b6 Nov 25 14:28:37 crc kubenswrapper[4796]: E1125 14:28:37.175313 4796 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.227:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187b463f673bec13 openshift-kube-apiserver 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 14:28:37.174545427 +0000 UTC m=+245.517654881,LastTimestamp:2025-11-25 14:28:37.174545427 +0000 UTC m=+245.517654881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 14:28:37 crc kubenswrapper[4796]: I1125 14:28:37.895645 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"3b43d58fbaa886173dcf17534c5f2445100ae76ce78b82fcd2b8ea61a127e9f5"} Nov 25 14:28:37 crc kubenswrapper[4796]: I1125 14:28:37.896041 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"beef206d71cee50697818b5040a03a6114f896604e5c2cc138110f180da5b1b6"} Nov 25 14:28:37 crc kubenswrapper[4796]: E1125 14:28:37.896808 4796 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.227:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:28:37 crc kubenswrapper[4796]: I1125 14:28:37.896825 4796 status_manager.go:851] "Failed to get status for pod" podUID="cf70b233-4a08-40ee-9ae3-42c7f242ba60" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Nov 25 14:28:39 crc kubenswrapper[4796]: E1125 14:28:39.392450 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.227:6443: connect: connection refused" interval="6.4s" Nov 25 14:28:40 crc kubenswrapper[4796]: E1125 14:28:40.495561 4796 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.227:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" volumeName="registry-storage" Nov 25 14:28:42 crc kubenswrapper[4796]: I1125 14:28:42.414896 4796 status_manager.go:851] "Failed to get status for pod" podUID="cf70b233-4a08-40ee-9ae3-42c7f242ba60" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Nov 25 14:28:43 crc kubenswrapper[4796]: E1125 14:28:43.651054 4796 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.227:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187b463f673bec13 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 14:28:37.174545427 +0000 UTC m=+245.517654881,LastTimestamp:2025-11-25 14:28:37.174545427 +0000 UTC m=+245.517654881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 14:28:44 crc kubenswrapper[4796]: I1125 14:28:44.409249 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:28:44 crc kubenswrapper[4796]: I1125 14:28:44.410548 4796 status_manager.go:851] "Failed to get status for pod" podUID="cf70b233-4a08-40ee-9ae3-42c7f242ba60" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Nov 25 14:28:44 crc kubenswrapper[4796]: I1125 14:28:44.433034 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b4b5234d-a2f9-45ac-87ba-8637e0672dd4" Nov 25 14:28:44 crc kubenswrapper[4796]: I1125 14:28:44.433085 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b4b5234d-a2f9-45ac-87ba-8637e0672dd4" Nov 25 14:28:44 crc kubenswrapper[4796]: E1125 14:28:44.433657 4796 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial 
tcp 38.102.83.227:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:28:44 crc kubenswrapper[4796]: I1125 14:28:44.434297 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:28:44 crc kubenswrapper[4796]: W1125 14:28:44.466490 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-6349061d2413191a686552600dfcbb7e077d28f52f961e6b77d7c6af016b43a6 WatchSource:0}: Error finding container 6349061d2413191a686552600dfcbb7e077d28f52f961e6b77d7c6af016b43a6: Status 404 returned error can't find the container with id 6349061d2413191a686552600dfcbb7e077d28f52f961e6b77d7c6af016b43a6 Nov 25 14:28:44 crc kubenswrapper[4796]: I1125 14:28:44.945101 4796 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="1d2fd28e36496a766567bb888f2139ce94a00cb5933193fd7a0ca0681c71f878" exitCode=0 Nov 25 14:28:44 crc kubenswrapper[4796]: I1125 14:28:44.945208 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"1d2fd28e36496a766567bb888f2139ce94a00cb5933193fd7a0ca0681c71f878"} Nov 25 14:28:44 crc kubenswrapper[4796]: I1125 14:28:44.945472 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6349061d2413191a686552600dfcbb7e077d28f52f961e6b77d7c6af016b43a6"} Nov 25 14:28:44 crc kubenswrapper[4796]: I1125 14:28:44.945738 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b4b5234d-a2f9-45ac-87ba-8637e0672dd4" Nov 25 14:28:44 crc kubenswrapper[4796]: I1125 14:28:44.945754 4796 
mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b4b5234d-a2f9-45ac-87ba-8637e0672dd4" Nov 25 14:28:44 crc kubenswrapper[4796]: E1125 14:28:44.946148 4796 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:28:44 crc kubenswrapper[4796]: I1125 14:28:44.946339 4796 status_manager.go:851] "Failed to get status for pod" podUID="cf70b233-4a08-40ee-9ae3-42c7f242ba60" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.227:6443: connect: connection refused" Nov 25 14:28:45 crc kubenswrapper[4796]: I1125 14:28:45.965783 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5c8e17cb0bd28e8a3d61691e1d024a3dc468d186cdbeb94a54bacd8203e6853b"} Nov 25 14:28:45 crc kubenswrapper[4796]: I1125 14:28:45.966096 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d026d01886d8b3c19a4bae50df523def81d82ba354a56b30066d3d6af5ff7623"} Nov 25 14:28:45 crc kubenswrapper[4796]: I1125 14:28:45.966109 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c57003c3abd4ec29030f7ef071d102170972a5855edc63542130b924a5415e6d"} Nov 25 14:28:46 crc kubenswrapper[4796]: I1125 14:28:46.973367 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 25 14:28:46 crc kubenswrapper[4796]: I1125 14:28:46.973688 4796 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4" exitCode=1 Nov 25 14:28:46 crc kubenswrapper[4796]: I1125 14:28:46.973748 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4"} Nov 25 14:28:46 crc kubenswrapper[4796]: I1125 14:28:46.974210 4796 scope.go:117] "RemoveContainer" containerID="f153bed6bec08b97670e24cef01f63c773a926e5146bcf2cb874b5d88215cba4" Nov 25 14:28:46 crc kubenswrapper[4796]: I1125 14:28:46.979421 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7655703e2ff102ef8a275c0639c1ea51a27371977f680afa67f216706ee7ed7f"} Nov 25 14:28:46 crc kubenswrapper[4796]: I1125 14:28:46.979466 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6eec5288761062093cab2d8b81da60df75ce6340fdbf2aca31866410cc092ec1"} Nov 25 14:28:46 crc kubenswrapper[4796]: I1125 14:28:46.979809 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b4b5234d-a2f9-45ac-87ba-8637e0672dd4" Nov 25 14:28:46 crc kubenswrapper[4796]: I1125 14:28:46.979826 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b4b5234d-a2f9-45ac-87ba-8637e0672dd4" Nov 25 14:28:46 crc kubenswrapper[4796]: 
I1125 14:28:46.980083 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:28:47 crc kubenswrapper[4796]: I1125 14:28:47.987845 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 25 14:28:47 crc kubenswrapper[4796]: I1125 14:28:47.987892 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a7e29ffbc28caff410788115fbe8b90d91877ae8cdebeeaf1b650c2bdf2a467e"} Nov 25 14:28:49 crc kubenswrapper[4796]: I1125 14:28:49.434947 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:28:49 crc kubenswrapper[4796]: I1125 14:28:49.435027 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:28:49 crc kubenswrapper[4796]: I1125 14:28:49.441989 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:28:49 crc kubenswrapper[4796]: I1125 14:28:49.815076 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" podUID="76da93ba-dcf4-4f52-982f-ce98a9718cc8" containerName="oauth-openshift" containerID="cri-o://17768babcdf83fc3b3730d7427490272737fb4a258e70d95118fce3c42527648" gracePeriod=15 Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.002941 4796 generic.go:334] "Generic (PLEG): container finished" podID="76da93ba-dcf4-4f52-982f-ce98a9718cc8" containerID="17768babcdf83fc3b3730d7427490272737fb4a258e70d95118fce3c42527648" exitCode=0 Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.002993 4796 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" event={"ID":"76da93ba-dcf4-4f52-982f-ce98a9718cc8","Type":"ContainerDied","Data":"17768babcdf83fc3b3730d7427490272737fb4a258e70d95118fce3c42527648"} Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.256105 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.391335 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/76da93ba-dcf4-4f52-982f-ce98a9718cc8-audit-dir\") pod \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.391462 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-ocp-branding-template\") pod \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.391478 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76da93ba-dcf4-4f52-982f-ce98a9718cc8-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "76da93ba-dcf4-4f52-982f-ce98a9718cc8" (UID: "76da93ba-dcf4-4f52-982f-ce98a9718cc8"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.391537 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-user-template-error\") pod \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.391665 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-session\") pod \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.391721 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-trusted-ca-bundle\") pod \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.391786 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/76da93ba-dcf4-4f52-982f-ce98a9718cc8-audit-policies\") pod \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.391824 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgk4s\" (UniqueName: \"kubernetes.io/projected/76da93ba-dcf4-4f52-982f-ce98a9718cc8-kube-api-access-bgk4s\") pod \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.391868 
4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-cliconfig\") pod \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.391925 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-router-certs\") pod \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.392818 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-user-template-login\") pod \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.393376 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "76da93ba-dcf4-4f52-982f-ce98a9718cc8" (UID: "76da93ba-dcf4-4f52-982f-ce98a9718cc8"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.393402 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76da93ba-dcf4-4f52-982f-ce98a9718cc8-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "76da93ba-dcf4-4f52-982f-ce98a9718cc8" (UID: "76da93ba-dcf4-4f52-982f-ce98a9718cc8"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.393650 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "76da93ba-dcf4-4f52-982f-ce98a9718cc8" (UID: "76da93ba-dcf4-4f52-982f-ce98a9718cc8"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.393788 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-serving-cert\") pod \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.394524 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-user-idp-0-file-data\") pod \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.394619 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-user-template-provider-selection\") pod \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.394678 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-service-ca\") pod \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\" (UID: \"76da93ba-dcf4-4f52-982f-ce98a9718cc8\") " Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.395170 4796 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/76da93ba-dcf4-4f52-982f-ce98a9718cc8-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.395215 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.395241 4796 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/76da93ba-dcf4-4f52-982f-ce98a9718cc8-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.395264 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.395744 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "76da93ba-dcf4-4f52-982f-ce98a9718cc8" (UID: "76da93ba-dcf4-4f52-982f-ce98a9718cc8"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.400827 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "76da93ba-dcf4-4f52-982f-ce98a9718cc8" (UID: "76da93ba-dcf4-4f52-982f-ce98a9718cc8"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.401419 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "76da93ba-dcf4-4f52-982f-ce98a9718cc8" (UID: "76da93ba-dcf4-4f52-982f-ce98a9718cc8"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.401517 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76da93ba-dcf4-4f52-982f-ce98a9718cc8-kube-api-access-bgk4s" (OuterVolumeSpecName: "kube-api-access-bgk4s") pod "76da93ba-dcf4-4f52-982f-ce98a9718cc8" (UID: "76da93ba-dcf4-4f52-982f-ce98a9718cc8"). InnerVolumeSpecName "kube-api-access-bgk4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.402001 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "76da93ba-dcf4-4f52-982f-ce98a9718cc8" (UID: "76da93ba-dcf4-4f52-982f-ce98a9718cc8"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.402850 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "76da93ba-dcf4-4f52-982f-ce98a9718cc8" (UID: "76da93ba-dcf4-4f52-982f-ce98a9718cc8"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.403550 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "76da93ba-dcf4-4f52-982f-ce98a9718cc8" (UID: "76da93ba-dcf4-4f52-982f-ce98a9718cc8"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.404247 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "76da93ba-dcf4-4f52-982f-ce98a9718cc8" (UID: "76da93ba-dcf4-4f52-982f-ce98a9718cc8"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.405275 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "76da93ba-dcf4-4f52-982f-ce98a9718cc8" (UID: "76da93ba-dcf4-4f52-982f-ce98a9718cc8"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.406857 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "76da93ba-dcf4-4f52-982f-ce98a9718cc8" (UID: "76da93ba-dcf4-4f52-982f-ce98a9718cc8"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.498406 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.499001 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.499472 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.499487 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgk4s\" (UniqueName: \"kubernetes.io/projected/76da93ba-dcf4-4f52-982f-ce98a9718cc8-kube-api-access-bgk4s\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.499529 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.499543 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.499556 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.499585 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.499599 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:50 crc kubenswrapper[4796]: I1125 14:28:50.499612 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/76da93ba-dcf4-4f52-982f-ce98a9718cc8-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:28:51 crc kubenswrapper[4796]: I1125 14:28:51.009609 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" event={"ID":"76da93ba-dcf4-4f52-982f-ce98a9718cc8","Type":"ContainerDied","Data":"09a4846608331bc352acf3042e5691efc5141b2ce5a8575101acb8ac8aebead6"} Nov 25 14:28:51 crc 
kubenswrapper[4796]: I1125 14:28:51.009672 4796 scope.go:117] "RemoveContainer" containerID="17768babcdf83fc3b3730d7427490272737fb4a258e70d95118fce3c42527648" Nov 25 14:28:51 crc kubenswrapper[4796]: I1125 14:28:51.009685 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dr5s9" Nov 25 14:28:51 crc kubenswrapper[4796]: I1125 14:28:51.757375 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:28:51 crc kubenswrapper[4796]: I1125 14:28:51.766743 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:28:52 crc kubenswrapper[4796]: I1125 14:28:52.012303 4796 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:28:52 crc kubenswrapper[4796]: I1125 14:28:52.016839 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:28:52 crc kubenswrapper[4796]: E1125 14:28:52.298627 4796 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError" Nov 25 14:28:52 crc kubenswrapper[4796]: I1125 14:28:52.427441 4796 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e1de9d81-35f0-4487-abc0-d255f9f74590" Nov 25 14:28:53 crc kubenswrapper[4796]: I1125 14:28:53.024298 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b4b5234d-a2f9-45ac-87ba-8637e0672dd4" Nov 25 14:28:53 crc kubenswrapper[4796]: I1125 14:28:53.024342 4796 
mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b4b5234d-a2f9-45ac-87ba-8637e0672dd4" Nov 25 14:28:53 crc kubenswrapper[4796]: I1125 14:28:53.027091 4796 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e1de9d81-35f0-4487-abc0-d255f9f74590" Nov 25 14:28:53 crc kubenswrapper[4796]: I1125 14:28:53.028763 4796 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://c57003c3abd4ec29030f7ef071d102170972a5855edc63542130b924a5415e6d" Nov 25 14:28:53 crc kubenswrapper[4796]: I1125 14:28:53.028803 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:28:54 crc kubenswrapper[4796]: I1125 14:28:54.038645 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b4b5234d-a2f9-45ac-87ba-8637e0672dd4" Nov 25 14:28:54 crc kubenswrapper[4796]: I1125 14:28:54.039149 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b4b5234d-a2f9-45ac-87ba-8637e0672dd4" Nov 25 14:28:54 crc kubenswrapper[4796]: I1125 14:28:54.042939 4796 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e1de9d81-35f0-4487-abc0-d255f9f74590" Nov 25 14:29:02 crc kubenswrapper[4796]: I1125 14:29:02.193802 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 25 14:29:03 crc kubenswrapper[4796]: I1125 14:29:03.096190 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" 
Nov 25 14:29:03 crc kubenswrapper[4796]: I1125 14:29:03.190001 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 25 14:29:03 crc kubenswrapper[4796]: I1125 14:29:03.461644 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 25 14:29:03 crc kubenswrapper[4796]: I1125 14:29:03.506244 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 25 14:29:03 crc kubenswrapper[4796]: I1125 14:29:03.528514 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 25 14:29:03 crc kubenswrapper[4796]: I1125 14:29:03.538359 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 25 14:29:03 crc kubenswrapper[4796]: I1125 14:29:03.699973 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 25 14:29:03 crc kubenswrapper[4796]: I1125 14:29:03.722265 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 25 14:29:04 crc kubenswrapper[4796]: I1125 14:29:04.008723 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 25 14:29:04 crc kubenswrapper[4796]: I1125 14:29:04.013757 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:29:04 crc kubenswrapper[4796]: I1125 14:29:04.190095 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 25 14:29:04 crc kubenswrapper[4796]: I1125 14:29:04.209899 4796 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 25 14:29:04 crc kubenswrapper[4796]: I1125 14:29:04.271772 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 25 14:29:04 crc kubenswrapper[4796]: I1125 14:29:04.346779 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 25 14:29:04 crc kubenswrapper[4796]: I1125 14:29:04.361591 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 14:29:04 crc kubenswrapper[4796]: I1125 14:29:04.402989 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 25 14:29:04 crc kubenswrapper[4796]: I1125 14:29:04.511811 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 25 14:29:04 crc kubenswrapper[4796]: I1125 14:29:04.639668 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 25 14:29:04 crc kubenswrapper[4796]: I1125 14:29:04.680129 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 25 14:29:04 crc kubenswrapper[4796]: I1125 14:29:04.764286 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 25 14:29:04 crc kubenswrapper[4796]: I1125 14:29:04.895166 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 25 14:29:04 crc kubenswrapper[4796]: I1125 14:29:04.924850 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 25 14:29:05 crc 
kubenswrapper[4796]: I1125 14:29:05.098112 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 25 14:29:05 crc kubenswrapper[4796]: I1125 14:29:05.123166 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 25 14:29:05 crc kubenswrapper[4796]: I1125 14:29:05.251315 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 25 14:29:05 crc kubenswrapper[4796]: I1125 14:29:05.407937 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 25 14:29:05 crc kubenswrapper[4796]: I1125 14:29:05.502567 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 25 14:29:05 crc kubenswrapper[4796]: I1125 14:29:05.511488 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 25 14:29:05 crc kubenswrapper[4796]: I1125 14:29:05.539277 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 25 14:29:05 crc kubenswrapper[4796]: I1125 14:29:05.810555 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 25 14:29:05 crc kubenswrapper[4796]: I1125 14:29:05.813425 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 25 14:29:06 crc kubenswrapper[4796]: I1125 14:29:06.007858 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 25 14:29:06 crc kubenswrapper[4796]: I1125 14:29:06.145475 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 25 14:29:06 crc 
kubenswrapper[4796]: I1125 14:29:06.216314 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 25 14:29:06 crc kubenswrapper[4796]: I1125 14:29:06.374442 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 25 14:29:06 crc kubenswrapper[4796]: I1125 14:29:06.443703 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 25 14:29:06 crc kubenswrapper[4796]: I1125 14:29:06.631787 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 25 14:29:06 crc kubenswrapper[4796]: I1125 14:29:06.732817 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 25 14:29:06 crc kubenswrapper[4796]: I1125 14:29:06.833953 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 25 14:29:06 crc kubenswrapper[4796]: I1125 14:29:06.834109 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 25 14:29:06 crc kubenswrapper[4796]: I1125 14:29:06.897986 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 25 14:29:06 crc kubenswrapper[4796]: I1125 14:29:06.993226 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 25 14:29:06 crc kubenswrapper[4796]: I1125 14:29:06.999606 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 25 14:29:07 crc kubenswrapper[4796]: I1125 14:29:07.051148 4796 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 25 14:29:07 crc kubenswrapper[4796]: I1125 14:29:07.052505 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 25 14:29:07 crc kubenswrapper[4796]: I1125 14:29:07.083611 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 25 14:29:07 crc kubenswrapper[4796]: I1125 14:29:07.114310 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 25 14:29:07 crc kubenswrapper[4796]: I1125 14:29:07.146531 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 25 14:29:07 crc kubenswrapper[4796]: I1125 14:29:07.149251 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 25 14:29:07 crc kubenswrapper[4796]: I1125 14:29:07.176695 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 25 14:29:07 crc kubenswrapper[4796]: I1125 14:29:07.510743 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 25 14:29:07 crc kubenswrapper[4796]: I1125 14:29:07.830184 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 25 14:29:07 crc kubenswrapper[4796]: I1125 14:29:07.852460 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 25 14:29:07 crc kubenswrapper[4796]: I1125 14:29:07.999227 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 25 14:29:08 crc 
kubenswrapper[4796]: I1125 14:29:08.010838 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 25 14:29:08 crc kubenswrapper[4796]: I1125 14:29:08.076491 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 25 14:29:08 crc kubenswrapper[4796]: I1125 14:29:08.095350 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 25 14:29:08 crc kubenswrapper[4796]: I1125 14:29:08.209035 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 14:29:08 crc kubenswrapper[4796]: I1125 14:29:08.247184 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 25 14:29:08 crc kubenswrapper[4796]: I1125 14:29:08.274840 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 25 14:29:08 crc kubenswrapper[4796]: I1125 14:29:08.343828 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 25 14:29:08 crc kubenswrapper[4796]: I1125 14:29:08.346157 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 25 14:29:08 crc kubenswrapper[4796]: I1125 14:29:08.372069 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 25 14:29:08 crc kubenswrapper[4796]: I1125 14:29:08.447313 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 14:29:08 crc kubenswrapper[4796]: I1125 14:29:08.497837 4796 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 25 14:29:08 crc kubenswrapper[4796]: I1125 14:29:08.502431 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 25 14:29:08 crc kubenswrapper[4796]: I1125 14:29:08.549766 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 25 14:29:08 crc kubenswrapper[4796]: I1125 14:29:08.591877 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 25 14:29:08 crc kubenswrapper[4796]: I1125 14:29:08.671329 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 25 14:29:08 crc kubenswrapper[4796]: I1125 14:29:08.672948 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 25 14:29:08 crc kubenswrapper[4796]: I1125 14:29:08.979787 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 25 14:29:09 crc kubenswrapper[4796]: I1125 14:29:09.077959 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 25 14:29:09 crc kubenswrapper[4796]: I1125 14:29:09.084387 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 25 14:29:09 crc kubenswrapper[4796]: I1125 14:29:09.174220 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 25 14:29:09 crc kubenswrapper[4796]: I1125 14:29:09.185433 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 25 14:29:09 crc 
kubenswrapper[4796]: I1125 14:29:09.217144 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 25 14:29:09 crc kubenswrapper[4796]: I1125 14:29:09.280907 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 25 14:29:09 crc kubenswrapper[4796]: I1125 14:29:09.281974 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 25 14:29:09 crc kubenswrapper[4796]: I1125 14:29:09.287409 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 25 14:29:09 crc kubenswrapper[4796]: I1125 14:29:09.309163 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 25 14:29:09 crc kubenswrapper[4796]: I1125 14:29:09.330985 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 25 14:29:09 crc kubenswrapper[4796]: I1125 14:29:09.352381 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 25 14:29:09 crc kubenswrapper[4796]: I1125 14:29:09.455118 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 14:29:09 crc kubenswrapper[4796]: I1125 14:29:09.495322 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 25 14:29:09 crc kubenswrapper[4796]: I1125 14:29:09.582332 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 25 14:29:09 crc kubenswrapper[4796]: I1125 14:29:09.741039 4796 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"env-overrides" Nov 25 14:29:09 crc kubenswrapper[4796]: I1125 14:29:09.777801 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 25 14:29:09 crc kubenswrapper[4796]: I1125 14:29:09.853436 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 25 14:29:09 crc kubenswrapper[4796]: I1125 14:29:09.895471 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 14:29:09 crc kubenswrapper[4796]: I1125 14:29:09.926719 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 25 14:29:10 crc kubenswrapper[4796]: I1125 14:29:10.094014 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 25 14:29:10 crc kubenswrapper[4796]: I1125 14:29:10.131896 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 25 14:29:10 crc kubenswrapper[4796]: I1125 14:29:10.199375 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 25 14:29:10 crc kubenswrapper[4796]: I1125 14:29:10.244163 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 25 14:29:10 crc kubenswrapper[4796]: I1125 14:29:10.270537 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 25 14:29:10 crc kubenswrapper[4796]: I1125 14:29:10.311936 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 25 14:29:10 crc kubenswrapper[4796]: I1125 14:29:10.339850 4796 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 25 14:29:10 crc kubenswrapper[4796]: I1125 14:29:10.416385 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 25 14:29:10 crc kubenswrapper[4796]: I1125 14:29:10.426573 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 14:29:10 crc kubenswrapper[4796]: I1125 14:29:10.466713 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 25 14:29:10 crc kubenswrapper[4796]: I1125 14:29:10.470521 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 25 14:29:10 crc kubenswrapper[4796]: I1125 14:29:10.484102 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 25 14:29:10 crc kubenswrapper[4796]: I1125 14:29:10.633888 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 25 14:29:10 crc kubenswrapper[4796]: I1125 14:29:10.653641 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 25 14:29:10 crc kubenswrapper[4796]: I1125 14:29:10.696247 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 25 14:29:10 crc kubenswrapper[4796]: I1125 14:29:10.775057 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 25 14:29:10 crc kubenswrapper[4796]: I1125 14:29:10.792097 4796 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"kube-root-ca.crt" Nov 25 14:29:10 crc kubenswrapper[4796]: I1125 14:29:10.853653 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 25 14:29:10 crc kubenswrapper[4796]: I1125 14:29:10.867388 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 25 14:29:10 crc kubenswrapper[4796]: I1125 14:29:10.942069 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 14:29:10 crc kubenswrapper[4796]: I1125 14:29:10.997173 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 14:29:11 crc kubenswrapper[4796]: I1125 14:29:11.019511 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 25 14:29:11 crc kubenswrapper[4796]: I1125 14:29:11.059996 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 25 14:29:11 crc kubenswrapper[4796]: I1125 14:29:11.091055 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 25 14:29:11 crc kubenswrapper[4796]: I1125 14:29:11.189454 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 25 14:29:11 crc kubenswrapper[4796]: I1125 14:29:11.269876 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 25 14:29:11 crc kubenswrapper[4796]: I1125 14:29:11.298823 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 25 14:29:11 crc kubenswrapper[4796]: I1125 14:29:11.323982 4796 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 25 14:29:11 crc kubenswrapper[4796]: I1125 14:29:11.358250 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 25 14:29:11 crc kubenswrapper[4796]: I1125 14:29:11.358780 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 25 14:29:11 crc kubenswrapper[4796]: I1125 14:29:11.363443 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 25 14:29:11 crc kubenswrapper[4796]: I1125 14:29:11.387057 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 25 14:29:11 crc kubenswrapper[4796]: I1125 14:29:11.410328 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 25 14:29:11 crc kubenswrapper[4796]: I1125 14:29:11.411414 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 25 14:29:11 crc kubenswrapper[4796]: I1125 14:29:11.428830 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 25 14:29:11 crc kubenswrapper[4796]: I1125 14:29:11.482202 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 25 14:29:11 crc kubenswrapper[4796]: I1125 14:29:11.541272 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 25 14:29:11 crc kubenswrapper[4796]: I1125 14:29:11.599275 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 25 14:29:11 crc kubenswrapper[4796]: I1125 14:29:11.638732 4796 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 25 14:29:11 crc kubenswrapper[4796]: I1125 14:29:11.770486 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 25 14:29:11 crc kubenswrapper[4796]: I1125 14:29:11.819197 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 25 14:29:11 crc kubenswrapper[4796]: I1125 14:29:11.831153 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 25 14:29:11 crc kubenswrapper[4796]: I1125 14:29:11.876192 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 14:29:11 crc kubenswrapper[4796]: I1125 14:29:11.924833 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 14:29:12 crc kubenswrapper[4796]: I1125 14:29:12.015021 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 25 14:29:12 crc kubenswrapper[4796]: I1125 14:29:12.025121 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 25 14:29:12 crc kubenswrapper[4796]: I1125 14:29:12.090302 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 25 14:29:12 crc kubenswrapper[4796]: I1125 14:29:12.106244 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 25 14:29:12 crc kubenswrapper[4796]: I1125 14:29:12.160090 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 25 14:29:12 crc kubenswrapper[4796]: 
I1125 14:29:12.210547 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 25 14:29:12 crc kubenswrapper[4796]: I1125 14:29:12.219178 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 25 14:29:12 crc kubenswrapper[4796]: I1125 14:29:12.337356 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 25 14:29:12 crc kubenswrapper[4796]: I1125 14:29:12.465018 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 25 14:29:12 crc kubenswrapper[4796]: I1125 14:29:12.519965 4796 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 25 14:29:12 crc kubenswrapper[4796]: I1125 14:29:12.534888 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 25 14:29:12 crc kubenswrapper[4796]: I1125 14:29:12.541091 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 25 14:29:12 crc kubenswrapper[4796]: I1125 14:29:12.558220 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 25 14:29:12 crc kubenswrapper[4796]: I1125 14:29:12.571097 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 25 14:29:12 crc kubenswrapper[4796]: I1125 14:29:12.582728 4796 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 25 14:29:12 crc kubenswrapper[4796]: I1125 14:29:12.627333 4796 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"config" Nov 25 14:29:12 crc kubenswrapper[4796]: I1125 14:29:12.678007 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 25 14:29:12 crc kubenswrapper[4796]: I1125 14:29:12.741005 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 25 14:29:12 crc kubenswrapper[4796]: I1125 14:29:12.789322 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 25 14:29:12 crc kubenswrapper[4796]: I1125 14:29:12.870857 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 14:29:12 crc kubenswrapper[4796]: I1125 14:29:12.949724 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 25 14:29:12 crc kubenswrapper[4796]: I1125 14:29:12.979211 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 25 14:29:12 crc kubenswrapper[4796]: I1125 14:29:12.986966 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 25 14:29:13 crc kubenswrapper[4796]: I1125 14:29:13.013726 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 25 14:29:13 crc kubenswrapper[4796]: I1125 14:29:13.049281 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 14:29:13 crc kubenswrapper[4796]: I1125 14:29:13.090711 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 25 14:29:13 crc kubenswrapper[4796]: I1125 14:29:13.219838 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 14:29:13 crc kubenswrapper[4796]: I1125 14:29:13.264217 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 25 14:29:13 crc kubenswrapper[4796]: I1125 14:29:13.431133 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 25 14:29:13 crc kubenswrapper[4796]: I1125 14:29:13.455374 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 25 14:29:13 crc kubenswrapper[4796]: I1125 14:29:13.462663 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 25 14:29:13 crc kubenswrapper[4796]: I1125 14:29:13.538526 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 25 14:29:13 crc kubenswrapper[4796]: I1125 14:29:13.598985 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 25 14:29:13 crc kubenswrapper[4796]: I1125 14:29:13.620552 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 25 14:29:13 crc kubenswrapper[4796]: I1125 14:29:13.792870 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 14:29:13 crc kubenswrapper[4796]: I1125 14:29:13.920045 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 25 14:29:14 crc kubenswrapper[4796]: I1125 14:29:14.109560 4796 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 25 14:29:14 crc kubenswrapper[4796]: I1125 14:29:14.228680 4796 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 25 14:29:14 crc kubenswrapper[4796]: I1125 14:29:14.242678 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 25 14:29:14 crc kubenswrapper[4796]: I1125 14:29:14.247038 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 25 14:29:14 crc kubenswrapper[4796]: I1125 14:29:14.275239 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 25 14:29:14 crc kubenswrapper[4796]: I1125 14:29:14.319516 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 25 14:29:14 crc kubenswrapper[4796]: I1125 14:29:14.387866 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 25 14:29:14 crc kubenswrapper[4796]: I1125 14:29:14.492923 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 25 14:29:14 crc kubenswrapper[4796]: I1125 14:29:14.512768 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 25 14:29:14 crc kubenswrapper[4796]: I1125 14:29:14.548124 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 25 14:29:14 crc kubenswrapper[4796]: I1125 14:29:14.571560 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 25 14:29:14 crc kubenswrapper[4796]: I1125 14:29:14.585624 
4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 25 14:29:14 crc kubenswrapper[4796]: I1125 14:29:14.690364 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 25 14:29:14 crc kubenswrapper[4796]: I1125 14:29:14.717151 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 25 14:29:14 crc kubenswrapper[4796]: I1125 14:29:14.803973 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 25 14:29:14 crc kubenswrapper[4796]: I1125 14:29:14.818738 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 25 14:29:14 crc kubenswrapper[4796]: I1125 14:29:14.913044 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 25 14:29:14 crc kubenswrapper[4796]: I1125 14:29:14.944769 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 25 14:29:15 crc kubenswrapper[4796]: I1125 14:29:15.068863 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 25 14:29:15 crc kubenswrapper[4796]: I1125 14:29:15.105602 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 25 14:29:15 crc kubenswrapper[4796]: I1125 14:29:15.377911 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 25 14:29:15 crc kubenswrapper[4796]: I1125 14:29:15.542293 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 14:29:15 crc kubenswrapper[4796]: I1125 
14:29:15.680745 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 25 14:29:15 crc kubenswrapper[4796]: I1125 14:29:15.695344 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 25 14:29:15 crc kubenswrapper[4796]: I1125 14:29:15.736746 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 25 14:29:15 crc kubenswrapper[4796]: I1125 14:29:15.859308 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 25 14:29:15 crc kubenswrapper[4796]: I1125 14:29:15.934715 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 25 14:29:16 crc kubenswrapper[4796]: I1125 14:29:16.022071 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 25 14:29:16 crc kubenswrapper[4796]: I1125 14:29:16.039471 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 25 14:29:16 crc kubenswrapper[4796]: I1125 14:29:16.049548 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 25 14:29:16 crc kubenswrapper[4796]: I1125 14:29:16.091244 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 25 14:29:16 crc kubenswrapper[4796]: I1125 14:29:16.128702 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 25 14:29:16 crc kubenswrapper[4796]: I1125 14:29:16.143709 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-dockercfg-f62pw" Nov 25 14:29:16 crc kubenswrapper[4796]: I1125 14:29:16.214484 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 25 14:29:16 crc kubenswrapper[4796]: I1125 14:29:16.238284 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 25 14:29:16 crc kubenswrapper[4796]: I1125 14:29:16.242052 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 25 14:29:16 crc kubenswrapper[4796]: I1125 14:29:16.387441 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 25 14:29:16 crc kubenswrapper[4796]: I1125 14:29:16.510853 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 25 14:29:16 crc kubenswrapper[4796]: I1125 14:29:16.521630 4796 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 25 14:29:16 crc kubenswrapper[4796]: I1125 14:29:16.524507 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 25 14:29:16 crc kubenswrapper[4796]: I1125 14:29:16.713864 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 25 14:29:16 crc kubenswrapper[4796]: I1125 14:29:16.935719 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 25 14:29:16 crc kubenswrapper[4796]: I1125 14:29:16.960612 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.015200 4796 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.018273 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.078719 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.081733 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.147712 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.152476 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.164378 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.351735 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.366359 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.500776 4796 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.508690 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-dr5s9"] Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.508773 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-86f4ddc759-5x2f4"] Nov 25 14:29:17 crc kubenswrapper[4796]: E1125 14:29:17.509043 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf70b233-4a08-40ee-9ae3-42c7f242ba60" containerName="installer" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.509069 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf70b233-4a08-40ee-9ae3-42c7f242ba60" containerName="installer" Nov 25 14:29:17 crc kubenswrapper[4796]: E1125 14:29:17.509097 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76da93ba-dcf4-4f52-982f-ce98a9718cc8" containerName="oauth-openshift" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.509110 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="76da93ba-dcf4-4f52-982f-ce98a9718cc8" containerName="oauth-openshift" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.509282 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf70b233-4a08-40ee-9ae3-42c7f242ba60" containerName="installer" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.509299 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="76da93ba-dcf4-4f52-982f-ce98a9718cc8" containerName="oauth-openshift" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.509562 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b4b5234d-a2f9-45ac-87ba-8637e0672dd4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.509644 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b4b5234d-a2f9-45ac-87ba-8637e0672dd4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.510077 
4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.513762 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.514898 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.516189 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.516735 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.516775 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.516838 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.516883 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.516791 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.516906 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.518234 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.518522 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.518605 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.521355 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.526960 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.533320 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.541097 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.590132 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=25.590104564 podStartE2EDuration="25.590104564s" podCreationTimestamp="2025-11-25 14:28:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:29:17.580873935 +0000 UTC m=+285.923983409" watchObservedRunningTime="2025-11-25 14:29:17.590104564 +0000 UTC m=+285.933214028" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.615242 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.615363 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-system-service-ca\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.615412 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-system-router-certs\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.615470 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-user-template-login\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.615506 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwg5q\" (UniqueName: \"kubernetes.io/projected/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-kube-api-access-jwg5q\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: 
\"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.615550 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-user-template-error\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.615714 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-audit-dir\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.615750 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-system-session\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.615795 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.615833 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.615877 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.615917 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-audit-policies\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.615983 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.616017 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.675168 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.684295 4796 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.716985 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.717059 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-system-service-ca\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.717132 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-system-router-certs\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 
25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.717175 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-user-template-login\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.717212 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwg5q\" (UniqueName: \"kubernetes.io/projected/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-kube-api-access-jwg5q\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.717283 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-user-template-error\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.717352 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-audit-dir\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.717400 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-system-session\") pod 
\"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.717465 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.717520 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.717609 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.717667 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-audit-policies\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 
14:29:17.717748 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.717798 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.718303 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-system-service-ca\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.719077 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.719169 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-audit-dir\") pod 
\"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.720176 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.720411 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-audit-policies\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.724670 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-system-session\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.724950 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.724908 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-user-template-error\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.725202 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.725234 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-system-router-certs\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.726350 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.726903 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-user-template-login\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " 
pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.730825 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.741120 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwg5q\" (UniqueName: \"kubernetes.io/projected/fc38779e-9660-4c24-a74f-a3ea52cc1fc9-kube-api-access-jwg5q\") pod \"oauth-openshift-86f4ddc759-5x2f4\" (UID: \"fc38779e-9660-4c24-a74f-a3ea52cc1fc9\") " pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.835406 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.879666 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 25 14:29:17 crc kubenswrapper[4796]: I1125 14:29:17.952000 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 25 14:29:18 crc kubenswrapper[4796]: I1125 14:29:18.259725 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 25 14:29:18 crc kubenswrapper[4796]: I1125 14:29:18.269793 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-86f4ddc759-5x2f4"] Nov 25 14:29:18 crc kubenswrapper[4796]: I1125 14:29:18.419654 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76da93ba-dcf4-4f52-982f-ce98a9718cc8" path="/var/lib/kubelet/pods/76da93ba-dcf4-4f52-982f-ce98a9718cc8/volumes" Nov 25 14:29:18 crc kubenswrapper[4796]: I1125 14:29:18.461142 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 25 14:29:18 crc kubenswrapper[4796]: I1125 14:29:18.764551 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 25 14:29:18 crc kubenswrapper[4796]: I1125 14:29:18.951938 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 25 14:29:18 crc kubenswrapper[4796]: I1125 14:29:18.954262 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 25 14:29:19 crc kubenswrapper[4796]: I1125 14:29:19.026033 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 25 14:29:19 crc 
kubenswrapper[4796]: I1125 14:29:19.213955 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" event={"ID":"fc38779e-9660-4c24-a74f-a3ea52cc1fc9","Type":"ContainerStarted","Data":"c45d715c16fedfda4c8e58f9c74fd89d7e9a89adcaad234e23903699d722e45d"} Nov 25 14:29:19 crc kubenswrapper[4796]: I1125 14:29:19.214023 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" event={"ID":"fc38779e-9660-4c24-a74f-a3ea52cc1fc9","Type":"ContainerStarted","Data":"1159af945e9bd59c35974c43fab46bd7433468bfb63ad9cf295afa812ac0a681"} Nov 25 14:29:19 crc kubenswrapper[4796]: I1125 14:29:19.214649 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:19 crc kubenswrapper[4796]: I1125 14:29:19.246041 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" Nov 25 14:29:19 crc kubenswrapper[4796]: I1125 14:29:19.276189 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-86f4ddc759-5x2f4" podStartSLOduration=55.276170085 podStartE2EDuration="55.276170085s" podCreationTimestamp="2025-11-25 14:28:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:29:19.245108365 +0000 UTC m=+287.588217779" watchObservedRunningTime="2025-11-25 14:29:19.276170085 +0000 UTC m=+287.619279509" Nov 25 14:29:19 crc kubenswrapper[4796]: I1125 14:29:19.389088 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 25 14:29:19 crc kubenswrapper[4796]: I1125 14:29:19.442303 4796 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 25 14:29:19 crc kubenswrapper[4796]: I1125 14:29:19.840757 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 25 14:29:21 crc kubenswrapper[4796]: I1125 14:29:21.780232 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 25 14:29:25 crc kubenswrapper[4796]: I1125 14:29:25.845716 4796 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 14:29:25 crc kubenswrapper[4796]: I1125 14:29:25.846515 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://3b43d58fbaa886173dcf17534c5f2445100ae76ce78b82fcd2b8ea61a127e9f5" gracePeriod=5 Nov 25 14:29:31 crc kubenswrapper[4796]: I1125 14:29:31.007893 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 25 14:29:31 crc kubenswrapper[4796]: I1125 14:29:31.008696 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:29:31 crc kubenswrapper[4796]: I1125 14:29:31.103081 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 14:29:31 crc kubenswrapper[4796]: I1125 14:29:31.103197 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 14:29:31 crc kubenswrapper[4796]: I1125 14:29:31.103220 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:29:31 crc kubenswrapper[4796]: I1125 14:29:31.103242 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 14:29:31 crc kubenswrapper[4796]: I1125 14:29:31.103300 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:29:31 crc kubenswrapper[4796]: I1125 14:29:31.103300 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:29:31 crc kubenswrapper[4796]: I1125 14:29:31.103352 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 14:29:31 crc kubenswrapper[4796]: I1125 14:29:31.103390 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 14:29:31 crc kubenswrapper[4796]: I1125 14:29:31.103356 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:29:31 crc kubenswrapper[4796]: I1125 14:29:31.103792 4796 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 14:29:31 crc kubenswrapper[4796]: I1125 14:29:31.103805 4796 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 25 14:29:31 crc kubenswrapper[4796]: I1125 14:29:31.103813 4796 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Nov 25 14:29:31 crc kubenswrapper[4796]: I1125 14:29:31.103824 4796 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Nov 25 14:29:31 crc kubenswrapper[4796]: I1125 14:29:31.115783 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:29:31 crc kubenswrapper[4796]: I1125 14:29:31.204668 4796 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 14:29:31 crc kubenswrapper[4796]: I1125 14:29:31.291378 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 25 14:29:31 crc kubenswrapper[4796]: I1125 14:29:31.291435 4796 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="3b43d58fbaa886173dcf17534c5f2445100ae76ce78b82fcd2b8ea61a127e9f5" exitCode=137 Nov 25 14:29:31 crc kubenswrapper[4796]: I1125 14:29:31.291476 4796 scope.go:117] "RemoveContainer" containerID="3b43d58fbaa886173dcf17534c5f2445100ae76ce78b82fcd2b8ea61a127e9f5" Nov 25 14:29:31 crc kubenswrapper[4796]: I1125 14:29:31.291617 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:29:31 crc kubenswrapper[4796]: I1125 14:29:31.318078 4796 scope.go:117] "RemoveContainer" containerID="3b43d58fbaa886173dcf17534c5f2445100ae76ce78b82fcd2b8ea61a127e9f5" Nov 25 14:29:31 crc kubenswrapper[4796]: E1125 14:29:31.318447 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b43d58fbaa886173dcf17534c5f2445100ae76ce78b82fcd2b8ea61a127e9f5\": container with ID starting with 3b43d58fbaa886173dcf17534c5f2445100ae76ce78b82fcd2b8ea61a127e9f5 not found: ID does not exist" containerID="3b43d58fbaa886173dcf17534c5f2445100ae76ce78b82fcd2b8ea61a127e9f5" Nov 25 14:29:31 crc kubenswrapper[4796]: I1125 14:29:31.318491 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b43d58fbaa886173dcf17534c5f2445100ae76ce78b82fcd2b8ea61a127e9f5"} err="failed to get container status \"3b43d58fbaa886173dcf17534c5f2445100ae76ce78b82fcd2b8ea61a127e9f5\": rpc error: code = NotFound desc = could not find container \"3b43d58fbaa886173dcf17534c5f2445100ae76ce78b82fcd2b8ea61a127e9f5\": container with ID starting with 3b43d58fbaa886173dcf17534c5f2445100ae76ce78b82fcd2b8ea61a127e9f5 not found: ID does not exist" Nov 25 14:29:32 crc kubenswrapper[4796]: I1125 14:29:32.423081 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.255811 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vbgn5"] Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.256505 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5" 
podUID="8b75fc2c-7703-4bee-9e6b-6ea32511fc42" containerName="controller-manager" containerID="cri-o://c6cfc4151dc6d87e91b5d5433b102c96d183e24b15190452a7362e114a2ecaa3" gracePeriod=30 Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.362984 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55"] Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.363197 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55" podUID="0d8de494-9c7a-47e6-afa1-47007836acd8" containerName="route-controller-manager" containerID="cri-o://e7db92ac79699209a60b2cca04a7833200162c51e9e8f6ba4c40ebdfc6640d3b" gracePeriod=30 Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.607971 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5" Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.722623 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55" Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.786725 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wz82\" (UniqueName: \"kubernetes.io/projected/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-kube-api-access-4wz82\") pod \"8b75fc2c-7703-4bee-9e6b-6ea32511fc42\" (UID: \"8b75fc2c-7703-4bee-9e6b-6ea32511fc42\") " Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.786780 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-config\") pod \"8b75fc2c-7703-4bee-9e6b-6ea32511fc42\" (UID: \"8b75fc2c-7703-4bee-9e6b-6ea32511fc42\") " Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.786806 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-serving-cert\") pod \"8b75fc2c-7703-4bee-9e6b-6ea32511fc42\" (UID: \"8b75fc2c-7703-4bee-9e6b-6ea32511fc42\") " Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.786888 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-proxy-ca-bundles\") pod \"8b75fc2c-7703-4bee-9e6b-6ea32511fc42\" (UID: \"8b75fc2c-7703-4bee-9e6b-6ea32511fc42\") " Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.786949 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-client-ca\") pod \"8b75fc2c-7703-4bee-9e6b-6ea32511fc42\" (UID: \"8b75fc2c-7703-4bee-9e6b-6ea32511fc42\") " Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.787794 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8b75fc2c-7703-4bee-9e6b-6ea32511fc42" (UID: "8b75fc2c-7703-4bee-9e6b-6ea32511fc42"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.787861 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-config" (OuterVolumeSpecName: "config") pod "8b75fc2c-7703-4bee-9e6b-6ea32511fc42" (UID: "8b75fc2c-7703-4bee-9e6b-6ea32511fc42"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.788308 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-client-ca" (OuterVolumeSpecName: "client-ca") pod "8b75fc2c-7703-4bee-9e6b-6ea32511fc42" (UID: "8b75fc2c-7703-4bee-9e6b-6ea32511fc42"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.793879 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-kube-api-access-4wz82" (OuterVolumeSpecName: "kube-api-access-4wz82") pod "8b75fc2c-7703-4bee-9e6b-6ea32511fc42" (UID: "8b75fc2c-7703-4bee-9e6b-6ea32511fc42"). InnerVolumeSpecName "kube-api-access-4wz82". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.795018 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8b75fc2c-7703-4bee-9e6b-6ea32511fc42" (UID: "8b75fc2c-7703-4bee-9e6b-6ea32511fc42"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.887917 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d8de494-9c7a-47e6-afa1-47007836acd8-client-ca\") pod \"0d8de494-9c7a-47e6-afa1-47007836acd8\" (UID: \"0d8de494-9c7a-47e6-afa1-47007836acd8\") " Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.887999 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d8de494-9c7a-47e6-afa1-47007836acd8-serving-cert\") pod \"0d8de494-9c7a-47e6-afa1-47007836acd8\" (UID: \"0d8de494-9c7a-47e6-afa1-47007836acd8\") " Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.888040 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d8de494-9c7a-47e6-afa1-47007836acd8-config\") pod \"0d8de494-9c7a-47e6-afa1-47007836acd8\" (UID: \"0d8de494-9c7a-47e6-afa1-47007836acd8\") " Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.888134 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzhhb\" (UniqueName: \"kubernetes.io/projected/0d8de494-9c7a-47e6-afa1-47007836acd8-kube-api-access-qzhhb\") pod \"0d8de494-9c7a-47e6-afa1-47007836acd8\" (UID: \"0d8de494-9c7a-47e6-afa1-47007836acd8\") " Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.888425 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wz82\" (UniqueName: \"kubernetes.io/projected/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-kube-api-access-4wz82\") on node \"crc\" DevicePath \"\"" Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.888449 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:29:43 
crc kubenswrapper[4796]: I1125 14:29:43.888466 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.888481 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.888496 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b75fc2c-7703-4bee-9e6b-6ea32511fc42-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.888966 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d8de494-9c7a-47e6-afa1-47007836acd8-client-ca" (OuterVolumeSpecName: "client-ca") pod "0d8de494-9c7a-47e6-afa1-47007836acd8" (UID: "0d8de494-9c7a-47e6-afa1-47007836acd8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.889161 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d8de494-9c7a-47e6-afa1-47007836acd8-config" (OuterVolumeSpecName: "config") pod "0d8de494-9c7a-47e6-afa1-47007836acd8" (UID: "0d8de494-9c7a-47e6-afa1-47007836acd8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.894980 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d8de494-9c7a-47e6-afa1-47007836acd8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0d8de494-9c7a-47e6-afa1-47007836acd8" (UID: "0d8de494-9c7a-47e6-afa1-47007836acd8"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.897588 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d8de494-9c7a-47e6-afa1-47007836acd8-kube-api-access-qzhhb" (OuterVolumeSpecName: "kube-api-access-qzhhb") pod "0d8de494-9c7a-47e6-afa1-47007836acd8" (UID: "0d8de494-9c7a-47e6-afa1-47007836acd8"). InnerVolumeSpecName "kube-api-access-qzhhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.990260 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d8de494-9c7a-47e6-afa1-47007836acd8-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.990318 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d8de494-9c7a-47e6-afa1-47007836acd8-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.990338 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzhhb\" (UniqueName: \"kubernetes.io/projected/0d8de494-9c7a-47e6-afa1-47007836acd8-kube-api-access-qzhhb\") on node \"crc\" DevicePath \"\"" Nov 25 14:29:43 crc kubenswrapper[4796]: I1125 14:29:43.990356 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d8de494-9c7a-47e6-afa1-47007836acd8-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.273619 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s"] Nov 25 14:29:44 crc kubenswrapper[4796]: E1125 14:29:44.274217 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d8de494-9c7a-47e6-afa1-47007836acd8" containerName="route-controller-manager" Nov 25 14:29:44 
crc kubenswrapper[4796]: I1125 14:29:44.274237 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d8de494-9c7a-47e6-afa1-47007836acd8" containerName="route-controller-manager" Nov 25 14:29:44 crc kubenswrapper[4796]: E1125 14:29:44.274270 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b75fc2c-7703-4bee-9e6b-6ea32511fc42" containerName="controller-manager" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.274283 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b75fc2c-7703-4bee-9e6b-6ea32511fc42" containerName="controller-manager" Nov 25 14:29:44 crc kubenswrapper[4796]: E1125 14:29:44.274307 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.274320 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.274505 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b75fc2c-7703-4bee-9e6b-6ea32511fc42" containerName="controller-manager" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.274523 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d8de494-9c7a-47e6-afa1-47007836acd8" containerName="route-controller-manager" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.274547 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.275103 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.304724 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48f3fd1c-dd98-47de-b672-d1ed663e73a7-serving-cert\") pod \"route-controller-manager-56d66966bb-n9z6s\" (UID: \"48f3fd1c-dd98-47de-b672-d1ed663e73a7\") " pod="openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.305359 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h9cx\" (UniqueName: \"kubernetes.io/projected/48f3fd1c-dd98-47de-b672-d1ed663e73a7-kube-api-access-7h9cx\") pod \"route-controller-manager-56d66966bb-n9z6s\" (UID: \"48f3fd1c-dd98-47de-b672-d1ed663e73a7\") " pod="openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.305478 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48f3fd1c-dd98-47de-b672-d1ed663e73a7-client-ca\") pod \"route-controller-manager-56d66966bb-n9z6s\" (UID: \"48f3fd1c-dd98-47de-b672-d1ed663e73a7\") " pod="openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.305520 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48f3fd1c-dd98-47de-b672-d1ed663e73a7-config\") pod \"route-controller-manager-56d66966bb-n9z6s\" (UID: \"48f3fd1c-dd98-47de-b672-d1ed663e73a7\") " pod="openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.306214 4796 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s"] Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.379170 4796 generic.go:334] "Generic (PLEG): container finished" podID="0d8de494-9c7a-47e6-afa1-47007836acd8" containerID="e7db92ac79699209a60b2cca04a7833200162c51e9e8f6ba4c40ebdfc6640d3b" exitCode=0 Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.379263 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55" event={"ID":"0d8de494-9c7a-47e6-afa1-47007836acd8","Type":"ContainerDied","Data":"e7db92ac79699209a60b2cca04a7833200162c51e9e8f6ba4c40ebdfc6640d3b"} Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.379297 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55" event={"ID":"0d8de494-9c7a-47e6-afa1-47007836acd8","Type":"ContainerDied","Data":"85acb8728a36217f493b15770a20bec981d51c3b5e0e888791d41b430ab40d80"} Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.379317 4796 scope.go:117] "RemoveContainer" containerID="e7db92ac79699209a60b2cca04a7833200162c51e9e8f6ba4c40ebdfc6640d3b" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.379344 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.382268 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.382275 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5" event={"ID":"8b75fc2c-7703-4bee-9e6b-6ea32511fc42","Type":"ContainerDied","Data":"c6cfc4151dc6d87e91b5d5433b102c96d183e24b15190452a7362e114a2ecaa3"} Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.386307 4796 generic.go:334] "Generic (PLEG): container finished" podID="8b75fc2c-7703-4bee-9e6b-6ea32511fc42" containerID="c6cfc4151dc6d87e91b5d5433b102c96d183e24b15190452a7362e114a2ecaa3" exitCode=0 Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.386414 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vbgn5" event={"ID":"8b75fc2c-7703-4bee-9e6b-6ea32511fc42","Type":"ContainerDied","Data":"e480e82554d122c238f5e52ef172c4abd6ce85e85ba7b982886014a424488321"} Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.406702 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48f3fd1c-dd98-47de-b672-d1ed663e73a7-serving-cert\") pod \"route-controller-manager-56d66966bb-n9z6s\" (UID: \"48f3fd1c-dd98-47de-b672-d1ed663e73a7\") " pod="openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.406777 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h9cx\" (UniqueName: \"kubernetes.io/projected/48f3fd1c-dd98-47de-b672-d1ed663e73a7-kube-api-access-7h9cx\") pod \"route-controller-manager-56d66966bb-n9z6s\" (UID: \"48f3fd1c-dd98-47de-b672-d1ed663e73a7\") " pod="openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.406837 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48f3fd1c-dd98-47de-b672-d1ed663e73a7-client-ca\") pod \"route-controller-manager-56d66966bb-n9z6s\" (UID: \"48f3fd1c-dd98-47de-b672-d1ed663e73a7\") " pod="openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.406870 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48f3fd1c-dd98-47de-b672-d1ed663e73a7-config\") pod \"route-controller-manager-56d66966bb-n9z6s\" (UID: \"48f3fd1c-dd98-47de-b672-d1ed663e73a7\") " pod="openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.408859 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48f3fd1c-dd98-47de-b672-d1ed663e73a7-config\") pod \"route-controller-manager-56d66966bb-n9z6s\" (UID: \"48f3fd1c-dd98-47de-b672-d1ed663e73a7\") " pod="openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.408871 4796 scope.go:117] "RemoveContainer" containerID="e7db92ac79699209a60b2cca04a7833200162c51e9e8f6ba4c40ebdfc6640d3b" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.410071 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48f3fd1c-dd98-47de-b672-d1ed663e73a7-client-ca\") pod \"route-controller-manager-56d66966bb-n9z6s\" (UID: \"48f3fd1c-dd98-47de-b672-d1ed663e73a7\") " pod="openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s" Nov 25 14:29:44 crc kubenswrapper[4796]: E1125 14:29:44.417770 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e7db92ac79699209a60b2cca04a7833200162c51e9e8f6ba4c40ebdfc6640d3b\": container with ID starting with e7db92ac79699209a60b2cca04a7833200162c51e9e8f6ba4c40ebdfc6640d3b not found: ID does not exist" containerID="e7db92ac79699209a60b2cca04a7833200162c51e9e8f6ba4c40ebdfc6640d3b" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.417847 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7db92ac79699209a60b2cca04a7833200162c51e9e8f6ba4c40ebdfc6640d3b"} err="failed to get container status \"e7db92ac79699209a60b2cca04a7833200162c51e9e8f6ba4c40ebdfc6640d3b\": rpc error: code = NotFound desc = could not find container \"e7db92ac79699209a60b2cca04a7833200162c51e9e8f6ba4c40ebdfc6640d3b\": container with ID starting with e7db92ac79699209a60b2cca04a7833200162c51e9e8f6ba4c40ebdfc6640d3b not found: ID does not exist" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.417894 4796 scope.go:117] "RemoveContainer" containerID="c6cfc4151dc6d87e91b5d5433b102c96d183e24b15190452a7362e114a2ecaa3" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.419714 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48f3fd1c-dd98-47de-b672-d1ed663e73a7-serving-cert\") pod \"route-controller-manager-56d66966bb-n9z6s\" (UID: \"48f3fd1c-dd98-47de-b672-d1ed663e73a7\") " pod="openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.437337 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h9cx\" (UniqueName: \"kubernetes.io/projected/48f3fd1c-dd98-47de-b672-d1ed663e73a7-kube-api-access-7h9cx\") pod \"route-controller-manager-56d66966bb-n9z6s\" (UID: \"48f3fd1c-dd98-47de-b672-d1ed663e73a7\") " pod="openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.448773 4796 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vbgn5"] Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.460133 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vbgn5"] Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.475454 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55"] Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.485091 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-6tp55"] Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.486485 4796 scope.go:117] "RemoveContainer" containerID="c6cfc4151dc6d87e91b5d5433b102c96d183e24b15190452a7362e114a2ecaa3" Nov 25 14:29:44 crc kubenswrapper[4796]: E1125 14:29:44.487252 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6cfc4151dc6d87e91b5d5433b102c96d183e24b15190452a7362e114a2ecaa3\": container with ID starting with c6cfc4151dc6d87e91b5d5433b102c96d183e24b15190452a7362e114a2ecaa3 not found: ID does not exist" containerID="c6cfc4151dc6d87e91b5d5433b102c96d183e24b15190452a7362e114a2ecaa3" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.487404 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6cfc4151dc6d87e91b5d5433b102c96d183e24b15190452a7362e114a2ecaa3"} err="failed to get container status \"c6cfc4151dc6d87e91b5d5433b102c96d183e24b15190452a7362e114a2ecaa3\": rpc error: code = NotFound desc = could not find container \"c6cfc4151dc6d87e91b5d5433b102c96d183e24b15190452a7362e114a2ecaa3\": container with ID starting with c6cfc4151dc6d87e91b5d5433b102c96d183e24b15190452a7362e114a2ecaa3 not found: ID does not exist" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 
14:29:44.627439 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s" Nov 25 14:29:44 crc kubenswrapper[4796]: I1125 14:29:44.861707 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s"] Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.305324 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f68b6db44-l5gvv"] Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.305934 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f68b6db44-l5gvv" Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.308491 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.309205 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.309227 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.309405 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.309493 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.309537 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.324707 4796 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.341390 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f68b6db44-l5gvv"] Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.396471 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s" event={"ID":"48f3fd1c-dd98-47de-b672-d1ed663e73a7","Type":"ContainerStarted","Data":"8e717000564f8ce4c3e65ed60bd763f46ebb1464e5b45daf644c2bf48deab221"} Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.396536 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s" event={"ID":"48f3fd1c-dd98-47de-b672-d1ed663e73a7","Type":"ContainerStarted","Data":"7e9ca9abc380c2650232d7c7805dcd5aec1f574f440cb96839fac68ef575e44f"} Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.396966 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s" Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.419239 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wxd9\" (UniqueName: \"kubernetes.io/projected/179c6044-747b-42a5-9b7f-22fd53cbf868-kube-api-access-5wxd9\") pod \"controller-manager-5f68b6db44-l5gvv\" (UID: \"179c6044-747b-42a5-9b7f-22fd53cbf868\") " pod="openshift-controller-manager/controller-manager-5f68b6db44-l5gvv" Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.419311 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/179c6044-747b-42a5-9b7f-22fd53cbf868-config\") pod \"controller-manager-5f68b6db44-l5gvv\" (UID: \"179c6044-747b-42a5-9b7f-22fd53cbf868\") " 
pod="openshift-controller-manager/controller-manager-5f68b6db44-l5gvv" Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.419364 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/179c6044-747b-42a5-9b7f-22fd53cbf868-serving-cert\") pod \"controller-manager-5f68b6db44-l5gvv\" (UID: \"179c6044-747b-42a5-9b7f-22fd53cbf868\") " pod="openshift-controller-manager/controller-manager-5f68b6db44-l5gvv" Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.419518 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/179c6044-747b-42a5-9b7f-22fd53cbf868-proxy-ca-bundles\") pod \"controller-manager-5f68b6db44-l5gvv\" (UID: \"179c6044-747b-42a5-9b7f-22fd53cbf868\") " pod="openshift-controller-manager/controller-manager-5f68b6db44-l5gvv" Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.419595 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/179c6044-747b-42a5-9b7f-22fd53cbf868-client-ca\") pod \"controller-manager-5f68b6db44-l5gvv\" (UID: \"179c6044-747b-42a5-9b7f-22fd53cbf868\") " pod="openshift-controller-manager/controller-manager-5f68b6db44-l5gvv" Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.422935 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s" podStartSLOduration=1.422921157 podStartE2EDuration="1.422921157s" podCreationTimestamp="2025-11-25 14:29:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:29:45.418921336 +0000 UTC m=+313.762030810" watchObservedRunningTime="2025-11-25 14:29:45.422921157 +0000 UTC m=+313.766030591" Nov 25 
14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.474522 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s" Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.521434 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wxd9\" (UniqueName: \"kubernetes.io/projected/179c6044-747b-42a5-9b7f-22fd53cbf868-kube-api-access-5wxd9\") pod \"controller-manager-5f68b6db44-l5gvv\" (UID: \"179c6044-747b-42a5-9b7f-22fd53cbf868\") " pod="openshift-controller-manager/controller-manager-5f68b6db44-l5gvv" Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.521520 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/179c6044-747b-42a5-9b7f-22fd53cbf868-config\") pod \"controller-manager-5f68b6db44-l5gvv\" (UID: \"179c6044-747b-42a5-9b7f-22fd53cbf868\") " pod="openshift-controller-manager/controller-manager-5f68b6db44-l5gvv" Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.521565 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/179c6044-747b-42a5-9b7f-22fd53cbf868-serving-cert\") pod \"controller-manager-5f68b6db44-l5gvv\" (UID: \"179c6044-747b-42a5-9b7f-22fd53cbf868\") " pod="openshift-controller-manager/controller-manager-5f68b6db44-l5gvv" Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.521677 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/179c6044-747b-42a5-9b7f-22fd53cbf868-proxy-ca-bundles\") pod \"controller-manager-5f68b6db44-l5gvv\" (UID: \"179c6044-747b-42a5-9b7f-22fd53cbf868\") " pod="openshift-controller-manager/controller-manager-5f68b6db44-l5gvv" Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.521722 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/179c6044-747b-42a5-9b7f-22fd53cbf868-client-ca\") pod \"controller-manager-5f68b6db44-l5gvv\" (UID: \"179c6044-747b-42a5-9b7f-22fd53cbf868\") " pod="openshift-controller-manager/controller-manager-5f68b6db44-l5gvv" Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.523268 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/179c6044-747b-42a5-9b7f-22fd53cbf868-client-ca\") pod \"controller-manager-5f68b6db44-l5gvv\" (UID: \"179c6044-747b-42a5-9b7f-22fd53cbf868\") " pod="openshift-controller-manager/controller-manager-5f68b6db44-l5gvv" Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.523273 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/179c6044-747b-42a5-9b7f-22fd53cbf868-proxy-ca-bundles\") pod \"controller-manager-5f68b6db44-l5gvv\" (UID: \"179c6044-747b-42a5-9b7f-22fd53cbf868\") " pod="openshift-controller-manager/controller-manager-5f68b6db44-l5gvv" Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.523966 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/179c6044-747b-42a5-9b7f-22fd53cbf868-config\") pod \"controller-manager-5f68b6db44-l5gvv\" (UID: \"179c6044-747b-42a5-9b7f-22fd53cbf868\") " pod="openshift-controller-manager/controller-manager-5f68b6db44-l5gvv" Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.529394 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/179c6044-747b-42a5-9b7f-22fd53cbf868-serving-cert\") pod \"controller-manager-5f68b6db44-l5gvv\" (UID: \"179c6044-747b-42a5-9b7f-22fd53cbf868\") " pod="openshift-controller-manager/controller-manager-5f68b6db44-l5gvv" Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 
14:29:45.539350 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wxd9\" (UniqueName: \"kubernetes.io/projected/179c6044-747b-42a5-9b7f-22fd53cbf868-kube-api-access-5wxd9\") pod \"controller-manager-5f68b6db44-l5gvv\" (UID: \"179c6044-747b-42a5-9b7f-22fd53cbf868\") " pod="openshift-controller-manager/controller-manager-5f68b6db44-l5gvv" Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.631692 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f68b6db44-l5gvv" Nov 25 14:29:45 crc kubenswrapper[4796]: I1125 14:29:45.832982 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f68b6db44-l5gvv"] Nov 25 14:29:45 crc kubenswrapper[4796]: W1125 14:29:45.835225 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod179c6044_747b_42a5_9b7f_22fd53cbf868.slice/crio-2877b1308003e51d70ec251974ddf5a6351c1d9e2a22c215301748c4853613df WatchSource:0}: Error finding container 2877b1308003e51d70ec251974ddf5a6351c1d9e2a22c215301748c4853613df: Status 404 returned error can't find the container with id 2877b1308003e51d70ec251974ddf5a6351c1d9e2a22c215301748c4853613df Nov 25 14:29:46 crc kubenswrapper[4796]: I1125 14:29:46.402569 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f68b6db44-l5gvv" event={"ID":"179c6044-747b-42a5-9b7f-22fd53cbf868","Type":"ContainerStarted","Data":"dfa46f9bb0d23814fe4007f6640ccfbe80252b3cdcb1f54330c0054de5a68f65"} Nov 25 14:29:46 crc kubenswrapper[4796]: I1125 14:29:46.404717 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5f68b6db44-l5gvv" Nov 25 14:29:46 crc kubenswrapper[4796]: I1125 14:29:46.404763 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-5f68b6db44-l5gvv" event={"ID":"179c6044-747b-42a5-9b7f-22fd53cbf868","Type":"ContainerStarted","Data":"2877b1308003e51d70ec251974ddf5a6351c1d9e2a22c215301748c4853613df"} Nov 25 14:29:46 crc kubenswrapper[4796]: I1125 14:29:46.406894 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f68b6db44-l5gvv" Nov 25 14:29:46 crc kubenswrapper[4796]: I1125 14:29:46.414930 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d8de494-9c7a-47e6-afa1-47007836acd8" path="/var/lib/kubelet/pods/0d8de494-9c7a-47e6-afa1-47007836acd8/volumes" Nov 25 14:29:46 crc kubenswrapper[4796]: I1125 14:29:46.415701 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b75fc2c-7703-4bee-9e6b-6ea32511fc42" path="/var/lib/kubelet/pods/8b75fc2c-7703-4bee-9e6b-6ea32511fc42/volumes" Nov 25 14:29:46 crc kubenswrapper[4796]: I1125 14:29:46.425609 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5f68b6db44-l5gvv" podStartSLOduration=3.42558763 podStartE2EDuration="3.42558763s" podCreationTimestamp="2025-11-25 14:29:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:29:46.422160556 +0000 UTC m=+314.765269980" watchObservedRunningTime="2025-11-25 14:29:46.42558763 +0000 UTC m=+314.768697064" Nov 25 14:30:00 crc kubenswrapper[4796]: I1125 14:30:00.166354 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401350-ktltv"] Nov 25 14:30:00 crc kubenswrapper[4796]: I1125 14:30:00.167795 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401350-ktltv" Nov 25 14:30:00 crc kubenswrapper[4796]: I1125 14:30:00.174009 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 14:30:00 crc kubenswrapper[4796]: I1125 14:30:00.185772 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 14:30:00 crc kubenswrapper[4796]: I1125 14:30:00.196352 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401350-ktltv"] Nov 25 14:30:00 crc kubenswrapper[4796]: I1125 14:30:00.318407 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3678f38-14e6-4551-855d-271f89aeaf3b-secret-volume\") pod \"collect-profiles-29401350-ktltv\" (UID: \"b3678f38-14e6-4551-855d-271f89aeaf3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401350-ktltv" Nov 25 14:30:00 crc kubenswrapper[4796]: I1125 14:30:00.318473 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mv9v\" (UniqueName: \"kubernetes.io/projected/b3678f38-14e6-4551-855d-271f89aeaf3b-kube-api-access-5mv9v\") pod \"collect-profiles-29401350-ktltv\" (UID: \"b3678f38-14e6-4551-855d-271f89aeaf3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401350-ktltv" Nov 25 14:30:00 crc kubenswrapper[4796]: I1125 14:30:00.318501 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3678f38-14e6-4551-855d-271f89aeaf3b-config-volume\") pod \"collect-profiles-29401350-ktltv\" (UID: \"b3678f38-14e6-4551-855d-271f89aeaf3b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401350-ktltv" Nov 25 14:30:00 crc kubenswrapper[4796]: I1125 14:30:00.420085 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3678f38-14e6-4551-855d-271f89aeaf3b-secret-volume\") pod \"collect-profiles-29401350-ktltv\" (UID: \"b3678f38-14e6-4551-855d-271f89aeaf3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401350-ktltv" Nov 25 14:30:00 crc kubenswrapper[4796]: I1125 14:30:00.420461 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mv9v\" (UniqueName: \"kubernetes.io/projected/b3678f38-14e6-4551-855d-271f89aeaf3b-kube-api-access-5mv9v\") pod \"collect-profiles-29401350-ktltv\" (UID: \"b3678f38-14e6-4551-855d-271f89aeaf3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401350-ktltv" Nov 25 14:30:00 crc kubenswrapper[4796]: I1125 14:30:00.420904 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3678f38-14e6-4551-855d-271f89aeaf3b-config-volume\") pod \"collect-profiles-29401350-ktltv\" (UID: \"b3678f38-14e6-4551-855d-271f89aeaf3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401350-ktltv" Nov 25 14:30:00 crc kubenswrapper[4796]: I1125 14:30:00.421801 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3678f38-14e6-4551-855d-271f89aeaf3b-config-volume\") pod \"collect-profiles-29401350-ktltv\" (UID: \"b3678f38-14e6-4551-855d-271f89aeaf3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401350-ktltv" Nov 25 14:30:00 crc kubenswrapper[4796]: I1125 14:30:00.433488 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/b3678f38-14e6-4551-855d-271f89aeaf3b-secret-volume\") pod \"collect-profiles-29401350-ktltv\" (UID: \"b3678f38-14e6-4551-855d-271f89aeaf3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401350-ktltv" Nov 25 14:30:00 crc kubenswrapper[4796]: I1125 14:30:00.445690 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mv9v\" (UniqueName: \"kubernetes.io/projected/b3678f38-14e6-4551-855d-271f89aeaf3b-kube-api-access-5mv9v\") pod \"collect-profiles-29401350-ktltv\" (UID: \"b3678f38-14e6-4551-855d-271f89aeaf3b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401350-ktltv" Nov 25 14:30:00 crc kubenswrapper[4796]: I1125 14:30:00.495488 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401350-ktltv" Nov 25 14:30:00 crc kubenswrapper[4796]: I1125 14:30:00.920707 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401350-ktltv"] Nov 25 14:30:00 crc kubenswrapper[4796]: W1125 14:30:00.925085 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3678f38_14e6_4551_855d_271f89aeaf3b.slice/crio-b0fef97f1f635d994cefa75a010a74b161ab476663f50fcc08ac766486e41718 WatchSource:0}: Error finding container b0fef97f1f635d994cefa75a010a74b161ab476663f50fcc08ac766486e41718: Status 404 returned error can't find the container with id b0fef97f1f635d994cefa75a010a74b161ab476663f50fcc08ac766486e41718 Nov 25 14:30:01 crc kubenswrapper[4796]: I1125 14:30:01.503350 4796 generic.go:334] "Generic (PLEG): container finished" podID="b3678f38-14e6-4551-855d-271f89aeaf3b" containerID="e0189a49dfa3c8639c71a8ca067188b734ab3305a564f607530ea74535c33ebf" exitCode=0 Nov 25 14:30:01 crc kubenswrapper[4796]: I1125 14:30:01.503392 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401350-ktltv" event={"ID":"b3678f38-14e6-4551-855d-271f89aeaf3b","Type":"ContainerDied","Data":"e0189a49dfa3c8639c71a8ca067188b734ab3305a564f607530ea74535c33ebf"} Nov 25 14:30:01 crc kubenswrapper[4796]: I1125 14:30:01.503418 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401350-ktltv" event={"ID":"b3678f38-14e6-4551-855d-271f89aeaf3b","Type":"ContainerStarted","Data":"b0fef97f1f635d994cefa75a010a74b161ab476663f50fcc08ac766486e41718"} Nov 25 14:30:02 crc kubenswrapper[4796]: I1125 14:30:02.973400 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401350-ktltv" Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.152124 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3678f38-14e6-4551-855d-271f89aeaf3b-secret-volume\") pod \"b3678f38-14e6-4551-855d-271f89aeaf3b\" (UID: \"b3678f38-14e6-4551-855d-271f89aeaf3b\") " Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.152201 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mv9v\" (UniqueName: \"kubernetes.io/projected/b3678f38-14e6-4551-855d-271f89aeaf3b-kube-api-access-5mv9v\") pod \"b3678f38-14e6-4551-855d-271f89aeaf3b\" (UID: \"b3678f38-14e6-4551-855d-271f89aeaf3b\") " Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.152288 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3678f38-14e6-4551-855d-271f89aeaf3b-config-volume\") pod \"b3678f38-14e6-4551-855d-271f89aeaf3b\" (UID: \"b3678f38-14e6-4551-855d-271f89aeaf3b\") " Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.153540 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/b3678f38-14e6-4551-855d-271f89aeaf3b-config-volume" (OuterVolumeSpecName: "config-volume") pod "b3678f38-14e6-4551-855d-271f89aeaf3b" (UID: "b3678f38-14e6-4551-855d-271f89aeaf3b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.165807 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3678f38-14e6-4551-855d-271f89aeaf3b-kube-api-access-5mv9v" (OuterVolumeSpecName: "kube-api-access-5mv9v") pod "b3678f38-14e6-4551-855d-271f89aeaf3b" (UID: "b3678f38-14e6-4551-855d-271f89aeaf3b"). InnerVolumeSpecName "kube-api-access-5mv9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.165831 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3678f38-14e6-4551-855d-271f89aeaf3b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b3678f38-14e6-4551-855d-271f89aeaf3b" (UID: "b3678f38-14e6-4551-855d-271f89aeaf3b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.223189 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s"] Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.223447 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s" podUID="48f3fd1c-dd98-47de-b672-d1ed663e73a7" containerName="route-controller-manager" containerID="cri-o://8e717000564f8ce4c3e65ed60bd763f46ebb1464e5b45daf644c2bf48deab221" gracePeriod=30 Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.254189 4796 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3678f38-14e6-4551-855d-271f89aeaf3b-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.254230 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mv9v\" (UniqueName: \"kubernetes.io/projected/b3678f38-14e6-4551-855d-271f89aeaf3b-kube-api-access-5mv9v\") on node \"crc\" DevicePath \"\"" Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.254240 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3678f38-14e6-4551-855d-271f89aeaf3b-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.518511 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401350-ktltv" Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.518540 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401350-ktltv" event={"ID":"b3678f38-14e6-4551-855d-271f89aeaf3b","Type":"ContainerDied","Data":"b0fef97f1f635d994cefa75a010a74b161ab476663f50fcc08ac766486e41718"} Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.518866 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0fef97f1f635d994cefa75a010a74b161ab476663f50fcc08ac766486e41718" Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.520002 4796 generic.go:334] "Generic (PLEG): container finished" podID="48f3fd1c-dd98-47de-b672-d1ed663e73a7" containerID="8e717000564f8ce4c3e65ed60bd763f46ebb1464e5b45daf644c2bf48deab221" exitCode=0 Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.520051 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s" event={"ID":"48f3fd1c-dd98-47de-b672-d1ed663e73a7","Type":"ContainerDied","Data":"8e717000564f8ce4c3e65ed60bd763f46ebb1464e5b45daf644c2bf48deab221"} Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.611316 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s" Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.760119 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48f3fd1c-dd98-47de-b672-d1ed663e73a7-config\") pod \"48f3fd1c-dd98-47de-b672-d1ed663e73a7\" (UID: \"48f3fd1c-dd98-47de-b672-d1ed663e73a7\") " Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.760207 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48f3fd1c-dd98-47de-b672-d1ed663e73a7-client-ca\") pod \"48f3fd1c-dd98-47de-b672-d1ed663e73a7\" (UID: \"48f3fd1c-dd98-47de-b672-d1ed663e73a7\") " Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.760286 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48f3fd1c-dd98-47de-b672-d1ed663e73a7-serving-cert\") pod \"48f3fd1c-dd98-47de-b672-d1ed663e73a7\" (UID: \"48f3fd1c-dd98-47de-b672-d1ed663e73a7\") " Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.760389 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h9cx\" (UniqueName: \"kubernetes.io/projected/48f3fd1c-dd98-47de-b672-d1ed663e73a7-kube-api-access-7h9cx\") pod \"48f3fd1c-dd98-47de-b672-d1ed663e73a7\" (UID: \"48f3fd1c-dd98-47de-b672-d1ed663e73a7\") " Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.761165 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48f3fd1c-dd98-47de-b672-d1ed663e73a7-config" (OuterVolumeSpecName: "config") pod "48f3fd1c-dd98-47de-b672-d1ed663e73a7" (UID: "48f3fd1c-dd98-47de-b672-d1ed663e73a7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.761209 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48f3fd1c-dd98-47de-b672-d1ed663e73a7-client-ca" (OuterVolumeSpecName: "client-ca") pod "48f3fd1c-dd98-47de-b672-d1ed663e73a7" (UID: "48f3fd1c-dd98-47de-b672-d1ed663e73a7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.765774 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48f3fd1c-dd98-47de-b672-d1ed663e73a7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "48f3fd1c-dd98-47de-b672-d1ed663e73a7" (UID: "48f3fd1c-dd98-47de-b672-d1ed663e73a7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.765850 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48f3fd1c-dd98-47de-b672-d1ed663e73a7-kube-api-access-7h9cx" (OuterVolumeSpecName: "kube-api-access-7h9cx") pod "48f3fd1c-dd98-47de-b672-d1ed663e73a7" (UID: "48f3fd1c-dd98-47de-b672-d1ed663e73a7"). InnerVolumeSpecName "kube-api-access-7h9cx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.861220 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h9cx\" (UniqueName: \"kubernetes.io/projected/48f3fd1c-dd98-47de-b672-d1ed663e73a7-kube-api-access-7h9cx\") on node \"crc\" DevicePath \"\"" Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.861256 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48f3fd1c-dd98-47de-b672-d1ed663e73a7-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.861271 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48f3fd1c-dd98-47de-b672-d1ed663e73a7-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:30:03 crc kubenswrapper[4796]: I1125 14:30:03.861282 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48f3fd1c-dd98-47de-b672-d1ed663e73a7-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:30:04 crc kubenswrapper[4796]: I1125 14:30:04.320966 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b559fd44c-72r25"] Nov 25 14:30:04 crc kubenswrapper[4796]: E1125 14:30:04.321197 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3678f38-14e6-4551-855d-271f89aeaf3b" containerName="collect-profiles" Nov 25 14:30:04 crc kubenswrapper[4796]: I1125 14:30:04.321212 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3678f38-14e6-4551-855d-271f89aeaf3b" containerName="collect-profiles" Nov 25 14:30:04 crc kubenswrapper[4796]: E1125 14:30:04.321228 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48f3fd1c-dd98-47de-b672-d1ed663e73a7" containerName="route-controller-manager" Nov 25 14:30:04 crc kubenswrapper[4796]: I1125 14:30:04.321236 4796 
state_mem.go:107] "Deleted CPUSet assignment" podUID="48f3fd1c-dd98-47de-b672-d1ed663e73a7" containerName="route-controller-manager" Nov 25 14:30:04 crc kubenswrapper[4796]: I1125 14:30:04.321355 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="48f3fd1c-dd98-47de-b672-d1ed663e73a7" containerName="route-controller-manager" Nov 25 14:30:04 crc kubenswrapper[4796]: I1125 14:30:04.321374 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3678f38-14e6-4551-855d-271f89aeaf3b" containerName="collect-profiles" Nov 25 14:30:04 crc kubenswrapper[4796]: I1125 14:30:04.321879 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b559fd44c-72r25" Nov 25 14:30:04 crc kubenswrapper[4796]: I1125 14:30:04.342025 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b559fd44c-72r25"] Nov 25 14:30:04 crc kubenswrapper[4796]: I1125 14:30:04.469958 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc5tq\" (UniqueName: \"kubernetes.io/projected/155bd33c-8418-4c2d-a106-d0db10bcebd7-kube-api-access-kc5tq\") pod \"route-controller-manager-6b559fd44c-72r25\" (UID: \"155bd33c-8418-4c2d-a106-d0db10bcebd7\") " pod="openshift-route-controller-manager/route-controller-manager-6b559fd44c-72r25" Nov 25 14:30:04 crc kubenswrapper[4796]: I1125 14:30:04.470370 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/155bd33c-8418-4c2d-a106-d0db10bcebd7-client-ca\") pod \"route-controller-manager-6b559fd44c-72r25\" (UID: \"155bd33c-8418-4c2d-a106-d0db10bcebd7\") " pod="openshift-route-controller-manager/route-controller-manager-6b559fd44c-72r25" Nov 25 14:30:04 crc kubenswrapper[4796]: I1125 14:30:04.470424 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/155bd33c-8418-4c2d-a106-d0db10bcebd7-serving-cert\") pod \"route-controller-manager-6b559fd44c-72r25\" (UID: \"155bd33c-8418-4c2d-a106-d0db10bcebd7\") " pod="openshift-route-controller-manager/route-controller-manager-6b559fd44c-72r25" Nov 25 14:30:04 crc kubenswrapper[4796]: I1125 14:30:04.470510 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/155bd33c-8418-4c2d-a106-d0db10bcebd7-config\") pod \"route-controller-manager-6b559fd44c-72r25\" (UID: \"155bd33c-8418-4c2d-a106-d0db10bcebd7\") " pod="openshift-route-controller-manager/route-controller-manager-6b559fd44c-72r25" Nov 25 14:30:04 crc kubenswrapper[4796]: I1125 14:30:04.527499 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s" event={"ID":"48f3fd1c-dd98-47de-b672-d1ed663e73a7","Type":"ContainerDied","Data":"7e9ca9abc380c2650232d7c7805dcd5aec1f574f440cb96839fac68ef575e44f"} Nov 25 14:30:04 crc kubenswrapper[4796]: I1125 14:30:04.527788 4796 scope.go:117] "RemoveContainer" containerID="8e717000564f8ce4c3e65ed60bd763f46ebb1464e5b45daf644c2bf48deab221" Nov 25 14:30:04 crc kubenswrapper[4796]: I1125 14:30:04.527614 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s" Nov 25 14:30:04 crc kubenswrapper[4796]: I1125 14:30:04.553616 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s"] Nov 25 14:30:04 crc kubenswrapper[4796]: I1125 14:30:04.558106 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56d66966bb-n9z6s"] Nov 25 14:30:04 crc kubenswrapper[4796]: I1125 14:30:04.571978 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc5tq\" (UniqueName: \"kubernetes.io/projected/155bd33c-8418-4c2d-a106-d0db10bcebd7-kube-api-access-kc5tq\") pod \"route-controller-manager-6b559fd44c-72r25\" (UID: \"155bd33c-8418-4c2d-a106-d0db10bcebd7\") " pod="openshift-route-controller-manager/route-controller-manager-6b559fd44c-72r25" Nov 25 14:30:04 crc kubenswrapper[4796]: I1125 14:30:04.572107 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/155bd33c-8418-4c2d-a106-d0db10bcebd7-client-ca\") pod \"route-controller-manager-6b559fd44c-72r25\" (UID: \"155bd33c-8418-4c2d-a106-d0db10bcebd7\") " pod="openshift-route-controller-manager/route-controller-manager-6b559fd44c-72r25" Nov 25 14:30:04 crc kubenswrapper[4796]: I1125 14:30:04.573889 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/155bd33c-8418-4c2d-a106-d0db10bcebd7-client-ca\") pod \"route-controller-manager-6b559fd44c-72r25\" (UID: \"155bd33c-8418-4c2d-a106-d0db10bcebd7\") " pod="openshift-route-controller-manager/route-controller-manager-6b559fd44c-72r25" Nov 25 14:30:04 crc kubenswrapper[4796]: I1125 14:30:04.574059 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/155bd33c-8418-4c2d-a106-d0db10bcebd7-serving-cert\") pod \"route-controller-manager-6b559fd44c-72r25\" (UID: \"155bd33c-8418-4c2d-a106-d0db10bcebd7\") " pod="openshift-route-controller-manager/route-controller-manager-6b559fd44c-72r25" Nov 25 14:30:04 crc kubenswrapper[4796]: I1125 14:30:04.574179 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/155bd33c-8418-4c2d-a106-d0db10bcebd7-config\") pod \"route-controller-manager-6b559fd44c-72r25\" (UID: \"155bd33c-8418-4c2d-a106-d0db10bcebd7\") " pod="openshift-route-controller-manager/route-controller-manager-6b559fd44c-72r25" Nov 25 14:30:04 crc kubenswrapper[4796]: I1125 14:30:04.577665 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/155bd33c-8418-4c2d-a106-d0db10bcebd7-config\") pod \"route-controller-manager-6b559fd44c-72r25\" (UID: \"155bd33c-8418-4c2d-a106-d0db10bcebd7\") " pod="openshift-route-controller-manager/route-controller-manager-6b559fd44c-72r25" Nov 25 14:30:04 crc kubenswrapper[4796]: I1125 14:30:04.578292 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/155bd33c-8418-4c2d-a106-d0db10bcebd7-serving-cert\") pod \"route-controller-manager-6b559fd44c-72r25\" (UID: \"155bd33c-8418-4c2d-a106-d0db10bcebd7\") " pod="openshift-route-controller-manager/route-controller-manager-6b559fd44c-72r25" Nov 25 14:30:04 crc kubenswrapper[4796]: I1125 14:30:04.592504 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc5tq\" (UniqueName: \"kubernetes.io/projected/155bd33c-8418-4c2d-a106-d0db10bcebd7-kube-api-access-kc5tq\") pod \"route-controller-manager-6b559fd44c-72r25\" (UID: \"155bd33c-8418-4c2d-a106-d0db10bcebd7\") " pod="openshift-route-controller-manager/route-controller-manager-6b559fd44c-72r25" Nov 25 14:30:04 crc 
kubenswrapper[4796]: I1125 14:30:04.649230 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b559fd44c-72r25" Nov 25 14:30:05 crc kubenswrapper[4796]: I1125 14:30:05.104706 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b559fd44c-72r25"] Nov 25 14:30:05 crc kubenswrapper[4796]: I1125 14:30:05.535612 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b559fd44c-72r25" event={"ID":"155bd33c-8418-4c2d-a106-d0db10bcebd7","Type":"ContainerStarted","Data":"db605cb9e3d42269cd2c3b29b4437861367e8aa9ec71fb0a6e59d243b66d3ce0"} Nov 25 14:30:05 crc kubenswrapper[4796]: I1125 14:30:05.535697 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b559fd44c-72r25" event={"ID":"155bd33c-8418-4c2d-a106-d0db10bcebd7","Type":"ContainerStarted","Data":"786a73da18e4e4d82c8ec5320fc84bbdf588d3f93c0c8e267999fa39c29f644d"} Nov 25 14:30:05 crc kubenswrapper[4796]: I1125 14:30:05.537164 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6b559fd44c-72r25" Nov 25 14:30:05 crc kubenswrapper[4796]: I1125 14:30:05.559200 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6b559fd44c-72r25" podStartSLOduration=2.559175796 podStartE2EDuration="2.559175796s" podCreationTimestamp="2025-11-25 14:30:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:30:05.55611901 +0000 UTC m=+333.899228474" watchObservedRunningTime="2025-11-25 14:30:05.559175796 +0000 UTC m=+333.902285250" Nov 25 14:30:06 crc kubenswrapper[4796]: I1125 14:30:06.000851 4796 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6b559fd44c-72r25" Nov 25 14:30:06 crc kubenswrapper[4796]: I1125 14:30:06.417349 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48f3fd1c-dd98-47de-b672-d1ed663e73a7" path="/var/lib/kubelet/pods/48f3fd1c-dd98-47de-b672-d1ed663e73a7/volumes" Nov 25 14:30:25 crc kubenswrapper[4796]: I1125 14:30:25.936379 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4cfrf"] Nov 25 14:30:25 crc kubenswrapper[4796]: I1125 14:30:25.938109 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 25 14:30:25 crc kubenswrapper[4796]: I1125 14:30:25.958908 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4cfrf"] Nov 25 14:30:26 crc kubenswrapper[4796]: I1125 14:30:26.082803 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fa3b91a-44c9-4867-9dc4-899d6c2d79e2-bound-sa-token\") pod \"image-registry-66df7c8f76-4cfrf\" (UID: \"0fa3b91a-44c9-4867-9dc4-899d6c2d79e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 25 14:30:26 crc kubenswrapper[4796]: I1125 14:30:26.082864 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fa3b91a-44c9-4867-9dc4-899d6c2d79e2-trusted-ca\") pod \"image-registry-66df7c8f76-4cfrf\" (UID: \"0fa3b91a-44c9-4867-9dc4-899d6c2d79e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 25 14:30:26 crc kubenswrapper[4796]: I1125 14:30:26.082896 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" 
(UniqueName: \"kubernetes.io/projected/0fa3b91a-44c9-4867-9dc4-899d6c2d79e2-registry-tls\") pod \"image-registry-66df7c8f76-4cfrf\" (UID: \"0fa3b91a-44c9-4867-9dc4-899d6c2d79e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 25 14:30:26 crc kubenswrapper[4796]: I1125 14:30:26.083043 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0fa3b91a-44c9-4867-9dc4-899d6c2d79e2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4cfrf\" (UID: \"0fa3b91a-44c9-4867-9dc4-899d6c2d79e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 25 14:30:26 crc kubenswrapper[4796]: I1125 14:30:26.083126 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-4cfrf\" (UID: \"0fa3b91a-44c9-4867-9dc4-899d6c2d79e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 25 14:30:26 crc kubenswrapper[4796]: I1125 14:30:26.083164 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0fa3b91a-44c9-4867-9dc4-899d6c2d79e2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4cfrf\" (UID: \"0fa3b91a-44c9-4867-9dc4-899d6c2d79e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 25 14:30:26 crc kubenswrapper[4796]: I1125 14:30:26.083249 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfdj9\" (UniqueName: \"kubernetes.io/projected/0fa3b91a-44c9-4867-9dc4-899d6c2d79e2-kube-api-access-hfdj9\") pod \"image-registry-66df7c8f76-4cfrf\" (UID: \"0fa3b91a-44c9-4867-9dc4-899d6c2d79e2\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 25 14:30:26 crc kubenswrapper[4796]: I1125 14:30:26.083281 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0fa3b91a-44c9-4867-9dc4-899d6c2d79e2-registry-certificates\") pod \"image-registry-66df7c8f76-4cfrf\" (UID: \"0fa3b91a-44c9-4867-9dc4-899d6c2d79e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 25 14:30:26 crc kubenswrapper[4796]: I1125 14:30:26.105301 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-4cfrf\" (UID: \"0fa3b91a-44c9-4867-9dc4-899d6c2d79e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 25 14:30:26 crc kubenswrapper[4796]: I1125 14:30:26.187050 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fa3b91a-44c9-4867-9dc4-899d6c2d79e2-trusted-ca\") pod \"image-registry-66df7c8f76-4cfrf\" (UID: \"0fa3b91a-44c9-4867-9dc4-899d6c2d79e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 25 14:30:26 crc kubenswrapper[4796]: I1125 14:30:26.185293 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fa3b91a-44c9-4867-9dc4-899d6c2d79e2-trusted-ca\") pod \"image-registry-66df7c8f76-4cfrf\" (UID: \"0fa3b91a-44c9-4867-9dc4-899d6c2d79e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 25 14:30:26 crc kubenswrapper[4796]: I1125 14:30:26.187164 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0fa3b91a-44c9-4867-9dc4-899d6c2d79e2-registry-tls\") 
pod \"image-registry-66df7c8f76-4cfrf\" (UID: \"0fa3b91a-44c9-4867-9dc4-899d6c2d79e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 25 14:30:26 crc kubenswrapper[4796]: I1125 14:30:26.187208 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0fa3b91a-44c9-4867-9dc4-899d6c2d79e2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4cfrf\" (UID: \"0fa3b91a-44c9-4867-9dc4-899d6c2d79e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 25 14:30:26 crc kubenswrapper[4796]: I1125 14:30:26.187243 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0fa3b91a-44c9-4867-9dc4-899d6c2d79e2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4cfrf\" (UID: \"0fa3b91a-44c9-4867-9dc4-899d6c2d79e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 25 14:30:26 crc kubenswrapper[4796]: I1125 14:30:26.187282 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfdj9\" (UniqueName: \"kubernetes.io/projected/0fa3b91a-44c9-4867-9dc4-899d6c2d79e2-kube-api-access-hfdj9\") pod \"image-registry-66df7c8f76-4cfrf\" (UID: \"0fa3b91a-44c9-4867-9dc4-899d6c2d79e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 25 14:30:26 crc kubenswrapper[4796]: I1125 14:30:26.187306 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0fa3b91a-44c9-4867-9dc4-899d6c2d79e2-registry-certificates\") pod \"image-registry-66df7c8f76-4cfrf\" (UID: \"0fa3b91a-44c9-4867-9dc4-899d6c2d79e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 25 14:30:26 crc kubenswrapper[4796]: I1125 14:30:26.187352 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fa3b91a-44c9-4867-9dc4-899d6c2d79e2-bound-sa-token\") pod \"image-registry-66df7c8f76-4cfrf\" (UID: \"0fa3b91a-44c9-4867-9dc4-899d6c2d79e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 25 14:30:26 crc kubenswrapper[4796]: I1125 14:30:26.188383 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0fa3b91a-44c9-4867-9dc4-899d6c2d79e2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4cfrf\" (UID: \"0fa3b91a-44c9-4867-9dc4-899d6c2d79e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 25 14:30:26 crc kubenswrapper[4796]: I1125 14:30:26.189030 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0fa3b91a-44c9-4867-9dc4-899d6c2d79e2-registry-certificates\") pod \"image-registry-66df7c8f76-4cfrf\" (UID: \"0fa3b91a-44c9-4867-9dc4-899d6c2d79e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 25 14:30:26 crc kubenswrapper[4796]: I1125 14:30:26.195031 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0fa3b91a-44c9-4867-9dc4-899d6c2d79e2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4cfrf\" (UID: \"0fa3b91a-44c9-4867-9dc4-899d6c2d79e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 25 14:30:26 crc kubenswrapper[4796]: I1125 14:30:26.202751 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0fa3b91a-44c9-4867-9dc4-899d6c2d79e2-registry-tls\") pod \"image-registry-66df7c8f76-4cfrf\" (UID: \"0fa3b91a-44c9-4867-9dc4-899d6c2d79e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 25 14:30:26 crc kubenswrapper[4796]: I1125 14:30:26.206013 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfdj9\" (UniqueName: \"kubernetes.io/projected/0fa3b91a-44c9-4867-9dc4-899d6c2d79e2-kube-api-access-hfdj9\") pod \"image-registry-66df7c8f76-4cfrf\" (UID: \"0fa3b91a-44c9-4867-9dc4-899d6c2d79e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 25 14:30:26 crc kubenswrapper[4796]: I1125 14:30:26.214042 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fa3b91a-44c9-4867-9dc4-899d6c2d79e2-bound-sa-token\") pod \"image-registry-66df7c8f76-4cfrf\" (UID: \"0fa3b91a-44c9-4867-9dc4-899d6c2d79e2\") " pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 25 14:30:26 crc kubenswrapper[4796]: I1125 14:30:26.261794 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 25 14:30:26 crc kubenswrapper[4796]: I1125 14:30:26.752872 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4cfrf"] Nov 25 14:30:27 crc kubenswrapper[4796]: I1125 14:30:27.693308 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" event={"ID":"0fa3b91a-44c9-4867-9dc4-899d6c2d79e2","Type":"ContainerStarted","Data":"e159c1eb7c64b2dfe45eb8cc4d2db5744f2b7e9ff832dc656374599174dc5dad"} Nov 25 14:30:27 crc kubenswrapper[4796]: I1125 14:30:27.693862 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" event={"ID":"0fa3b91a-44c9-4867-9dc4-899d6c2d79e2","Type":"ContainerStarted","Data":"23ceddda90756dc26bbb29951676159fad8e673ae31edecb1368998e557b4749"} Nov 25 14:30:27 crc kubenswrapper[4796]: I1125 14:30:27.693913 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 
25 14:30:27 crc kubenswrapper[4796]: I1125 14:30:27.713172 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" podStartSLOduration=2.713147866 podStartE2EDuration="2.713147866s" podCreationTimestamp="2025-11-25 14:30:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:30:27.712599967 +0000 UTC m=+356.055709411" watchObservedRunningTime="2025-11-25 14:30:27.713147866 +0000 UTC m=+356.056257300" Nov 25 14:30:46 crc kubenswrapper[4796]: I1125 14:30:46.267659 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-4cfrf" Nov 25 14:30:46 crc kubenswrapper[4796]: I1125 14:30:46.321362 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-95xvf"] Nov 25 14:30:49 crc kubenswrapper[4796]: I1125 14:30:49.514535 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 14:30:49 crc kubenswrapper[4796]: I1125 14:30:49.515742 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.331019 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bbltb"] Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.331905 4796 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/certified-operators-bbltb" podUID="a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c" containerName="registry-server" containerID="cri-o://a61be15cab07c22060a5f797a643a2e9c05aca81fa52b9296d15d9e4a8eda6f0" gracePeriod=30 Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.348454 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dsq6m"] Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.348712 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dsq6m" podUID="8c3dfd30-55e6-44cf-9657-cff0cc0d2499" containerName="registry-server" containerID="cri-o://1f2d546837c9e30b979c6a1b1a21c3b3da2bc776076282d57158926ac205e1a0" gracePeriod=30 Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.370449 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9lndl"] Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.370715 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-9lndl" podUID="506a2195-43f9-4a3a-ad03-ad55166c7e03" containerName="marketplace-operator" containerID="cri-o://b70144b45e5e3f17b808d9ee9efe4d97515c19da35cb8424881a6d488c1629e4" gracePeriod=30 Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.380743 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxlgd"] Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.381151 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pxlgd" podUID="c1c15ec0-52c3-4420-9ccf-a50630662516" containerName="registry-server" containerID="cri-o://6e0ebdafd79a8f6be15e026860dad60b6d3a34adc37d933e5b4ef5db044c6b86" gracePeriod=30 Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.394138 4796 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hfxxz"] Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.395033 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hfxxz" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.401775 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qqcls"] Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.402039 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qqcls" podUID="d44b94b1-15d2-48d6-8ae3-bc9787adc1e3" containerName="registry-server" containerID="cri-o://43d5137380587abb12ca1462f4f60671bdea2b9d9c915a45e239b89b1671db45" gracePeriod=30 Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.416951 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hfxxz"] Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.428218 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f1695f85-c20b-4708-b4f0-006f3a269301-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hfxxz\" (UID: \"f1695f85-c20b-4708-b4f0-006f3a269301\") " pod="openshift-marketplace/marketplace-operator-79b997595-hfxxz" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.428305 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8kw6\" (UniqueName: \"kubernetes.io/projected/f1695f85-c20b-4708-b4f0-006f3a269301-kube-api-access-d8kw6\") pod \"marketplace-operator-79b997595-hfxxz\" (UID: \"f1695f85-c20b-4708-b4f0-006f3a269301\") " pod="openshift-marketplace/marketplace-operator-79b997595-hfxxz" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 
14:31:07.428335 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1695f85-c20b-4708-b4f0-006f3a269301-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hfxxz\" (UID: \"f1695f85-c20b-4708-b4f0-006f3a269301\") " pod="openshift-marketplace/marketplace-operator-79b997595-hfxxz" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.529449 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8kw6\" (UniqueName: \"kubernetes.io/projected/f1695f85-c20b-4708-b4f0-006f3a269301-kube-api-access-d8kw6\") pod \"marketplace-operator-79b997595-hfxxz\" (UID: \"f1695f85-c20b-4708-b4f0-006f3a269301\") " pod="openshift-marketplace/marketplace-operator-79b997595-hfxxz" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.529511 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1695f85-c20b-4708-b4f0-006f3a269301-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hfxxz\" (UID: \"f1695f85-c20b-4708-b4f0-006f3a269301\") " pod="openshift-marketplace/marketplace-operator-79b997595-hfxxz" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.529682 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f1695f85-c20b-4708-b4f0-006f3a269301-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hfxxz\" (UID: \"f1695f85-c20b-4708-b4f0-006f3a269301\") " pod="openshift-marketplace/marketplace-operator-79b997595-hfxxz" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.530742 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1695f85-c20b-4708-b4f0-006f3a269301-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-hfxxz\" (UID: \"f1695f85-c20b-4708-b4f0-006f3a269301\") " pod="openshift-marketplace/marketplace-operator-79b997595-hfxxz" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.545121 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f1695f85-c20b-4708-b4f0-006f3a269301-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hfxxz\" (UID: \"f1695f85-c20b-4708-b4f0-006f3a269301\") " pod="openshift-marketplace/marketplace-operator-79b997595-hfxxz" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.549202 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8kw6\" (UniqueName: \"kubernetes.io/projected/f1695f85-c20b-4708-b4f0-006f3a269301-kube-api-access-d8kw6\") pod \"marketplace-operator-79b997595-hfxxz\" (UID: \"f1695f85-c20b-4708-b4f0-006f3a269301\") " pod="openshift-marketplace/marketplace-operator-79b997595-hfxxz" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.724301 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hfxxz" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.769535 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dsq6m" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.813514 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9lndl" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.820988 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxlgd" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.834632 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbnfx\" (UniqueName: \"kubernetes.io/projected/c1c15ec0-52c3-4420-9ccf-a50630662516-kube-api-access-tbnfx\") pod \"c1c15ec0-52c3-4420-9ccf-a50630662516\" (UID: \"c1c15ec0-52c3-4420-9ccf-a50630662516\") " Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.834687 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r74x9\" (UniqueName: \"kubernetes.io/projected/8c3dfd30-55e6-44cf-9657-cff0cc0d2499-kube-api-access-r74x9\") pod \"8c3dfd30-55e6-44cf-9657-cff0cc0d2499\" (UID: \"8c3dfd30-55e6-44cf-9657-cff0cc0d2499\") " Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.834740 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgjfv\" (UniqueName: \"kubernetes.io/projected/506a2195-43f9-4a3a-ad03-ad55166c7e03-kube-api-access-vgjfv\") pod \"506a2195-43f9-4a3a-ad03-ad55166c7e03\" (UID: \"506a2195-43f9-4a3a-ad03-ad55166c7e03\") " Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.834769 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/506a2195-43f9-4a3a-ad03-ad55166c7e03-marketplace-trusted-ca\") pod \"506a2195-43f9-4a3a-ad03-ad55166c7e03\" (UID: \"506a2195-43f9-4a3a-ad03-ad55166c7e03\") " Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.834787 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1c15ec0-52c3-4420-9ccf-a50630662516-utilities\") pod \"c1c15ec0-52c3-4420-9ccf-a50630662516\" (UID: \"c1c15ec0-52c3-4420-9ccf-a50630662516\") " Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.834828 4796 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/506a2195-43f9-4a3a-ad03-ad55166c7e03-marketplace-operator-metrics\") pod \"506a2195-43f9-4a3a-ad03-ad55166c7e03\" (UID: \"506a2195-43f9-4a3a-ad03-ad55166c7e03\") " Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.834843 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c3dfd30-55e6-44cf-9657-cff0cc0d2499-utilities\") pod \"8c3dfd30-55e6-44cf-9657-cff0cc0d2499\" (UID: \"8c3dfd30-55e6-44cf-9657-cff0cc0d2499\") " Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.834884 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1c15ec0-52c3-4420-9ccf-a50630662516-catalog-content\") pod \"c1c15ec0-52c3-4420-9ccf-a50630662516\" (UID: \"c1c15ec0-52c3-4420-9ccf-a50630662516\") " Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.834919 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c3dfd30-55e6-44cf-9657-cff0cc0d2499-catalog-content\") pod \"8c3dfd30-55e6-44cf-9657-cff0cc0d2499\" (UID: \"8c3dfd30-55e6-44cf-9657-cff0cc0d2499\") " Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.837482 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c3dfd30-55e6-44cf-9657-cff0cc0d2499-utilities" (OuterVolumeSpecName: "utilities") pod "8c3dfd30-55e6-44cf-9657-cff0cc0d2499" (UID: "8c3dfd30-55e6-44cf-9657-cff0cc0d2499"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.837795 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/506a2195-43f9-4a3a-ad03-ad55166c7e03-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "506a2195-43f9-4a3a-ad03-ad55166c7e03" (UID: "506a2195-43f9-4a3a-ad03-ad55166c7e03"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.838290 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1c15ec0-52c3-4420-9ccf-a50630662516-utilities" (OuterVolumeSpecName: "utilities") pod "c1c15ec0-52c3-4420-9ccf-a50630662516" (UID: "c1c15ec0-52c3-4420-9ccf-a50630662516"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.840093 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1c15ec0-52c3-4420-9ccf-a50630662516-kube-api-access-tbnfx" (OuterVolumeSpecName: "kube-api-access-tbnfx") pod "c1c15ec0-52c3-4420-9ccf-a50630662516" (UID: "c1c15ec0-52c3-4420-9ccf-a50630662516"). InnerVolumeSpecName "kube-api-access-tbnfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.842158 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c3dfd30-55e6-44cf-9657-cff0cc0d2499-kube-api-access-r74x9" (OuterVolumeSpecName: "kube-api-access-r74x9") pod "8c3dfd30-55e6-44cf-9657-cff0cc0d2499" (UID: "8c3dfd30-55e6-44cf-9657-cff0cc0d2499"). InnerVolumeSpecName "kube-api-access-r74x9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.843010 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/506a2195-43f9-4a3a-ad03-ad55166c7e03-kube-api-access-vgjfv" (OuterVolumeSpecName: "kube-api-access-vgjfv") pod "506a2195-43f9-4a3a-ad03-ad55166c7e03" (UID: "506a2195-43f9-4a3a-ad03-ad55166c7e03"). InnerVolumeSpecName "kube-api-access-vgjfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.854759 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506a2195-43f9-4a3a-ad03-ad55166c7e03-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "506a2195-43f9-4a3a-ad03-ad55166c7e03" (UID: "506a2195-43f9-4a3a-ad03-ad55166c7e03"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.869241 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1c15ec0-52c3-4420-9ccf-a50630662516-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1c15ec0-52c3-4420-9ccf-a50630662516" (UID: "c1c15ec0-52c3-4420-9ccf-a50630662516"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.885899 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qqcls" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.935612 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c3dfd30-55e6-44cf-9657-cff0cc0d2499-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c3dfd30-55e6-44cf-9657-cff0cc0d2499" (UID: "8c3dfd30-55e6-44cf-9657-cff0cc0d2499"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.935625 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d44b94b1-15d2-48d6-8ae3-bc9787adc1e3-catalog-content\") pod \"d44b94b1-15d2-48d6-8ae3-bc9787adc1e3\" (UID: \"d44b94b1-15d2-48d6-8ae3-bc9787adc1e3\") " Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.935793 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kjmb\" (UniqueName: \"kubernetes.io/projected/d44b94b1-15d2-48d6-8ae3-bc9787adc1e3-kube-api-access-8kjmb\") pod \"d44b94b1-15d2-48d6-8ae3-bc9787adc1e3\" (UID: \"d44b94b1-15d2-48d6-8ae3-bc9787adc1e3\") " Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.936630 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c3dfd30-55e6-44cf-9657-cff0cc0d2499-catalog-content\") pod \"8c3dfd30-55e6-44cf-9657-cff0cc0d2499\" (UID: \"8c3dfd30-55e6-44cf-9657-cff0cc0d2499\") " Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.936687 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d44b94b1-15d2-48d6-8ae3-bc9787adc1e3-utilities\") pod \"d44b94b1-15d2-48d6-8ae3-bc9787adc1e3\" (UID: \"d44b94b1-15d2-48d6-8ae3-bc9787adc1e3\") " Nov 25 14:31:07 crc kubenswrapper[4796]: W1125 14:31:07.936763 4796 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8c3dfd30-55e6-44cf-9657-cff0cc0d2499/volumes/kubernetes.io~empty-dir/catalog-content Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.936784 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c3dfd30-55e6-44cf-9657-cff0cc0d2499-catalog-content" 
(OuterVolumeSpecName: "catalog-content") pod "8c3dfd30-55e6-44cf-9657-cff0cc0d2499" (UID: "8c3dfd30-55e6-44cf-9657-cff0cc0d2499"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.937345 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgjfv\" (UniqueName: \"kubernetes.io/projected/506a2195-43f9-4a3a-ad03-ad55166c7e03-kube-api-access-vgjfv\") on node \"crc\" DevicePath \"\"" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.937418 4796 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/506a2195-43f9-4a3a-ad03-ad55166c7e03-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.937432 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1c15ec0-52c3-4420-9ccf-a50630662516-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.937444 4796 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/506a2195-43f9-4a3a-ad03-ad55166c7e03-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.937432 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d44b94b1-15d2-48d6-8ae3-bc9787adc1e3-utilities" (OuterVolumeSpecName: "utilities") pod "d44b94b1-15d2-48d6-8ae3-bc9787adc1e3" (UID: "d44b94b1-15d2-48d6-8ae3-bc9787adc1e3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.937456 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c3dfd30-55e6-44cf-9657-cff0cc0d2499-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.937502 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1c15ec0-52c3-4420-9ccf-a50630662516-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.937513 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c3dfd30-55e6-44cf-9657-cff0cc0d2499-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.937523 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbnfx\" (UniqueName: \"kubernetes.io/projected/c1c15ec0-52c3-4420-9ccf-a50630662516-kube-api-access-tbnfx\") on node \"crc\" DevicePath \"\"" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.937533 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r74x9\" (UniqueName: \"kubernetes.io/projected/8c3dfd30-55e6-44cf-9657-cff0cc0d2499-kube-api-access-r74x9\") on node \"crc\" DevicePath \"\"" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.939757 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d44b94b1-15d2-48d6-8ae3-bc9787adc1e3-kube-api-access-8kjmb" (OuterVolumeSpecName: "kube-api-access-8kjmb") pod "d44b94b1-15d2-48d6-8ae3-bc9787adc1e3" (UID: "d44b94b1-15d2-48d6-8ae3-bc9787adc1e3"). InnerVolumeSpecName "kube-api-access-8kjmb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.957897 4796 generic.go:334] "Generic (PLEG): container finished" podID="8c3dfd30-55e6-44cf-9657-cff0cc0d2499" containerID="1f2d546837c9e30b979c6a1b1a21c3b3da2bc776076282d57158926ac205e1a0" exitCode=0 Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.957959 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dsq6m" event={"ID":"8c3dfd30-55e6-44cf-9657-cff0cc0d2499","Type":"ContainerDied","Data":"1f2d546837c9e30b979c6a1b1a21c3b3da2bc776076282d57158926ac205e1a0"} Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.957988 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dsq6m" event={"ID":"8c3dfd30-55e6-44cf-9657-cff0cc0d2499","Type":"ContainerDied","Data":"f5b624de4071471761f1dbb64ad3763bbf6304e848aceed503f55fd5e6d28ec6"} Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.958004 4796 scope.go:117] "RemoveContainer" containerID="1f2d546837c9e30b979c6a1b1a21c3b3da2bc776076282d57158926ac205e1a0" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.958111 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dsq6m" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.969268 4796 generic.go:334] "Generic (PLEG): container finished" podID="a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c" containerID="a61be15cab07c22060a5f797a643a2e9c05aca81fa52b9296d15d9e4a8eda6f0" exitCode=0 Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.969343 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbltb" event={"ID":"a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c","Type":"ContainerDied","Data":"a61be15cab07c22060a5f797a643a2e9c05aca81fa52b9296d15d9e4a8eda6f0"} Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.971657 4796 generic.go:334] "Generic (PLEG): container finished" podID="d44b94b1-15d2-48d6-8ae3-bc9787adc1e3" containerID="43d5137380587abb12ca1462f4f60671bdea2b9d9c915a45e239b89b1671db45" exitCode=0 Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.972028 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qqcls" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.972041 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqcls" event={"ID":"d44b94b1-15d2-48d6-8ae3-bc9787adc1e3","Type":"ContainerDied","Data":"43d5137380587abb12ca1462f4f60671bdea2b9d9c915a45e239b89b1671db45"} Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.972644 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqcls" event={"ID":"d44b94b1-15d2-48d6-8ae3-bc9787adc1e3","Type":"ContainerDied","Data":"9e745b33f5ce7ab23a402018f5cd4f8a024ef24d429370bba78064bed44b1937"} Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.975839 4796 generic.go:334] "Generic (PLEG): container finished" podID="506a2195-43f9-4a3a-ad03-ad55166c7e03" containerID="b70144b45e5e3f17b808d9ee9efe4d97515c19da35cb8424881a6d488c1629e4" exitCode=0 Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.976238 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9lndl" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.979201 4796 generic.go:334] "Generic (PLEG): container finished" podID="c1c15ec0-52c3-4420-9ccf-a50630662516" containerID="6e0ebdafd79a8f6be15e026860dad60b6d3a34adc37d933e5b4ef5db044c6b86" exitCode=0 Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.979269 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxlgd" Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.977286 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9lndl" event={"ID":"506a2195-43f9-4a3a-ad03-ad55166c7e03","Type":"ContainerDied","Data":"b70144b45e5e3f17b808d9ee9efe4d97515c19da35cb8424881a6d488c1629e4"} Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.981915 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9lndl" event={"ID":"506a2195-43f9-4a3a-ad03-ad55166c7e03","Type":"ContainerDied","Data":"dd157b2794ec5aeafa677e53162d3ecaebeacc4c57a1f4000c8956b3de649e54"} Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.981955 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxlgd" event={"ID":"c1c15ec0-52c3-4420-9ccf-a50630662516","Type":"ContainerDied","Data":"6e0ebdafd79a8f6be15e026860dad60b6d3a34adc37d933e5b4ef5db044c6b86"} Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.981982 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxlgd" event={"ID":"c1c15ec0-52c3-4420-9ccf-a50630662516","Type":"ContainerDied","Data":"3be93ef79eed084357855fe242079c5ae589a4c3fe8dd9c5f82f8c70d070aa91"} Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.994088 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dsq6m"] Nov 25 14:31:07 crc kubenswrapper[4796]: I1125 14:31:07.997146 4796 scope.go:117] "RemoveContainer" containerID="d7fbe4972e47545a2c4e05c47dfc9d80eade5479ca79432acf10373a2f96fde4" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.001905 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dsq6m"] Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.032645 4796 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxlgd"] Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.035122 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxlgd"] Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.038620 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d44b94b1-15d2-48d6-8ae3-bc9787adc1e3-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.038650 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kjmb\" (UniqueName: \"kubernetes.io/projected/d44b94b1-15d2-48d6-8ae3-bc9787adc1e3-kube-api-access-8kjmb\") on node \"crc\" DevicePath \"\"" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.046823 4796 scope.go:117] "RemoveContainer" containerID="9c61951ad169925738339a9fe5daa5c28f89515b59f6bcb97b31d4e4066d7150" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.050723 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d44b94b1-15d2-48d6-8ae3-bc9787adc1e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d44b94b1-15d2-48d6-8ae3-bc9787adc1e3" (UID: "d44b94b1-15d2-48d6-8ae3-bc9787adc1e3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.055433 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9lndl"] Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.058444 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9lndl"] Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.064912 4796 scope.go:117] "RemoveContainer" containerID="1f2d546837c9e30b979c6a1b1a21c3b3da2bc776076282d57158926ac205e1a0" Nov 25 14:31:08 crc kubenswrapper[4796]: E1125 14:31:08.067034 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f2d546837c9e30b979c6a1b1a21c3b3da2bc776076282d57158926ac205e1a0\": container with ID starting with 1f2d546837c9e30b979c6a1b1a21c3b3da2bc776076282d57158926ac205e1a0 not found: ID does not exist" containerID="1f2d546837c9e30b979c6a1b1a21c3b3da2bc776076282d57158926ac205e1a0" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.067079 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f2d546837c9e30b979c6a1b1a21c3b3da2bc776076282d57158926ac205e1a0"} err="failed to get container status \"1f2d546837c9e30b979c6a1b1a21c3b3da2bc776076282d57158926ac205e1a0\": rpc error: code = NotFound desc = could not find container \"1f2d546837c9e30b979c6a1b1a21c3b3da2bc776076282d57158926ac205e1a0\": container with ID starting with 1f2d546837c9e30b979c6a1b1a21c3b3da2bc776076282d57158926ac205e1a0 not found: ID does not exist" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.067113 4796 scope.go:117] "RemoveContainer" containerID="d7fbe4972e47545a2c4e05c47dfc9d80eade5479ca79432acf10373a2f96fde4" Nov 25 14:31:08 crc kubenswrapper[4796]: E1125 14:31:08.067489 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"d7fbe4972e47545a2c4e05c47dfc9d80eade5479ca79432acf10373a2f96fde4\": container with ID starting with d7fbe4972e47545a2c4e05c47dfc9d80eade5479ca79432acf10373a2f96fde4 not found: ID does not exist" containerID="d7fbe4972e47545a2c4e05c47dfc9d80eade5479ca79432acf10373a2f96fde4" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.067582 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7fbe4972e47545a2c4e05c47dfc9d80eade5479ca79432acf10373a2f96fde4"} err="failed to get container status \"d7fbe4972e47545a2c4e05c47dfc9d80eade5479ca79432acf10373a2f96fde4\": rpc error: code = NotFound desc = could not find container \"d7fbe4972e47545a2c4e05c47dfc9d80eade5479ca79432acf10373a2f96fde4\": container with ID starting with d7fbe4972e47545a2c4e05c47dfc9d80eade5479ca79432acf10373a2f96fde4 not found: ID does not exist" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.067619 4796 scope.go:117] "RemoveContainer" containerID="9c61951ad169925738339a9fe5daa5c28f89515b59f6bcb97b31d4e4066d7150" Nov 25 14:31:08 crc kubenswrapper[4796]: E1125 14:31:08.067891 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c61951ad169925738339a9fe5daa5c28f89515b59f6bcb97b31d4e4066d7150\": container with ID starting with 9c61951ad169925738339a9fe5daa5c28f89515b59f6bcb97b31d4e4066d7150 not found: ID does not exist" containerID="9c61951ad169925738339a9fe5daa5c28f89515b59f6bcb97b31d4e4066d7150" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.067917 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c61951ad169925738339a9fe5daa5c28f89515b59f6bcb97b31d4e4066d7150"} err="failed to get container status \"9c61951ad169925738339a9fe5daa5c28f89515b59f6bcb97b31d4e4066d7150\": rpc error: code = NotFound desc = could not find container 
\"9c61951ad169925738339a9fe5daa5c28f89515b59f6bcb97b31d4e4066d7150\": container with ID starting with 9c61951ad169925738339a9fe5daa5c28f89515b59f6bcb97b31d4e4066d7150 not found: ID does not exist" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.067935 4796 scope.go:117] "RemoveContainer" containerID="43d5137380587abb12ca1462f4f60671bdea2b9d9c915a45e239b89b1671db45" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.082846 4796 scope.go:117] "RemoveContainer" containerID="b781f02ec71c87636f32db72b5dcf2806d72f60e78aa1bdfed74e5ccc3b6f380" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.096446 4796 scope.go:117] "RemoveContainer" containerID="0e84b48b0f827dd23af53e69e4bc9606ff90c759acb681cf28c8ba3eade68cb4" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.113380 4796 scope.go:117] "RemoveContainer" containerID="43d5137380587abb12ca1462f4f60671bdea2b9d9c915a45e239b89b1671db45" Nov 25 14:31:08 crc kubenswrapper[4796]: E1125 14:31:08.113931 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43d5137380587abb12ca1462f4f60671bdea2b9d9c915a45e239b89b1671db45\": container with ID starting with 43d5137380587abb12ca1462f4f60671bdea2b9d9c915a45e239b89b1671db45 not found: ID does not exist" containerID="43d5137380587abb12ca1462f4f60671bdea2b9d9c915a45e239b89b1671db45" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.113959 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43d5137380587abb12ca1462f4f60671bdea2b9d9c915a45e239b89b1671db45"} err="failed to get container status \"43d5137380587abb12ca1462f4f60671bdea2b9d9c915a45e239b89b1671db45\": rpc error: code = NotFound desc = could not find container \"43d5137380587abb12ca1462f4f60671bdea2b9d9c915a45e239b89b1671db45\": container with ID starting with 43d5137380587abb12ca1462f4f60671bdea2b9d9c915a45e239b89b1671db45 not found: ID does not exist" Nov 25 14:31:08 crc 
kubenswrapper[4796]: I1125 14:31:08.113981 4796 scope.go:117] "RemoveContainer" containerID="b781f02ec71c87636f32db72b5dcf2806d72f60e78aa1bdfed74e5ccc3b6f380" Nov 25 14:31:08 crc kubenswrapper[4796]: E1125 14:31:08.114349 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b781f02ec71c87636f32db72b5dcf2806d72f60e78aa1bdfed74e5ccc3b6f380\": container with ID starting with b781f02ec71c87636f32db72b5dcf2806d72f60e78aa1bdfed74e5ccc3b6f380 not found: ID does not exist" containerID="b781f02ec71c87636f32db72b5dcf2806d72f60e78aa1bdfed74e5ccc3b6f380" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.114398 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b781f02ec71c87636f32db72b5dcf2806d72f60e78aa1bdfed74e5ccc3b6f380"} err="failed to get container status \"b781f02ec71c87636f32db72b5dcf2806d72f60e78aa1bdfed74e5ccc3b6f380\": rpc error: code = NotFound desc = could not find container \"b781f02ec71c87636f32db72b5dcf2806d72f60e78aa1bdfed74e5ccc3b6f380\": container with ID starting with b781f02ec71c87636f32db72b5dcf2806d72f60e78aa1bdfed74e5ccc3b6f380 not found: ID does not exist" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.114434 4796 scope.go:117] "RemoveContainer" containerID="0e84b48b0f827dd23af53e69e4bc9606ff90c759acb681cf28c8ba3eade68cb4" Nov 25 14:31:08 crc kubenswrapper[4796]: E1125 14:31:08.114910 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e84b48b0f827dd23af53e69e4bc9606ff90c759acb681cf28c8ba3eade68cb4\": container with ID starting with 0e84b48b0f827dd23af53e69e4bc9606ff90c759acb681cf28c8ba3eade68cb4 not found: ID does not exist" containerID="0e84b48b0f827dd23af53e69e4bc9606ff90c759acb681cf28c8ba3eade68cb4" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.114930 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0e84b48b0f827dd23af53e69e4bc9606ff90c759acb681cf28c8ba3eade68cb4"} err="failed to get container status \"0e84b48b0f827dd23af53e69e4bc9606ff90c759acb681cf28c8ba3eade68cb4\": rpc error: code = NotFound desc = could not find container \"0e84b48b0f827dd23af53e69e4bc9606ff90c759acb681cf28c8ba3eade68cb4\": container with ID starting with 0e84b48b0f827dd23af53e69e4bc9606ff90c759acb681cf28c8ba3eade68cb4 not found: ID does not exist" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.114946 4796 scope.go:117] "RemoveContainer" containerID="b70144b45e5e3f17b808d9ee9efe4d97515c19da35cb8424881a6d488c1629e4" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.131054 4796 scope.go:117] "RemoveContainer" containerID="b70144b45e5e3f17b808d9ee9efe4d97515c19da35cb8424881a6d488c1629e4" Nov 25 14:31:08 crc kubenswrapper[4796]: E1125 14:31:08.131434 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b70144b45e5e3f17b808d9ee9efe4d97515c19da35cb8424881a6d488c1629e4\": container with ID starting with b70144b45e5e3f17b808d9ee9efe4d97515c19da35cb8424881a6d488c1629e4 not found: ID does not exist" containerID="b70144b45e5e3f17b808d9ee9efe4d97515c19da35cb8424881a6d488c1629e4" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.131462 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b70144b45e5e3f17b808d9ee9efe4d97515c19da35cb8424881a6d488c1629e4"} err="failed to get container status \"b70144b45e5e3f17b808d9ee9efe4d97515c19da35cb8424881a6d488c1629e4\": rpc error: code = NotFound desc = could not find container \"b70144b45e5e3f17b808d9ee9efe4d97515c19da35cb8424881a6d488c1629e4\": container with ID starting with b70144b45e5e3f17b808d9ee9efe4d97515c19da35cb8424881a6d488c1629e4 not found: ID does not exist" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.131482 4796 scope.go:117] "RemoveContainer" 
containerID="6e0ebdafd79a8f6be15e026860dad60b6d3a34adc37d933e5b4ef5db044c6b86" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.139932 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d44b94b1-15d2-48d6-8ae3-bc9787adc1e3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.151872 4796 scope.go:117] "RemoveContainer" containerID="7a9e3f58f447f41dc929ad5ee20af3d5253ff5e1584f8a7f754203f56346a7bd" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.164823 4796 scope.go:117] "RemoveContainer" containerID="ef0b7183594486ce00fa56545d344ee4bfeb8f7d70f2099b8c623dc1a27bda82" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.173753 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hfxxz"] Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.178464 4796 scope.go:117] "RemoveContainer" containerID="6e0ebdafd79a8f6be15e026860dad60b6d3a34adc37d933e5b4ef5db044c6b86" Nov 25 14:31:08 crc kubenswrapper[4796]: E1125 14:31:08.179047 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e0ebdafd79a8f6be15e026860dad60b6d3a34adc37d933e5b4ef5db044c6b86\": container with ID starting with 6e0ebdafd79a8f6be15e026860dad60b6d3a34adc37d933e5b4ef5db044c6b86 not found: ID does not exist" containerID="6e0ebdafd79a8f6be15e026860dad60b6d3a34adc37d933e5b4ef5db044c6b86" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.179076 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e0ebdafd79a8f6be15e026860dad60b6d3a34adc37d933e5b4ef5db044c6b86"} err="failed to get container status \"6e0ebdafd79a8f6be15e026860dad60b6d3a34adc37d933e5b4ef5db044c6b86\": rpc error: code = NotFound desc = could not find container 
\"6e0ebdafd79a8f6be15e026860dad60b6d3a34adc37d933e5b4ef5db044c6b86\": container with ID starting with 6e0ebdafd79a8f6be15e026860dad60b6d3a34adc37d933e5b4ef5db044c6b86 not found: ID does not exist" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.179100 4796 scope.go:117] "RemoveContainer" containerID="7a9e3f58f447f41dc929ad5ee20af3d5253ff5e1584f8a7f754203f56346a7bd" Nov 25 14:31:08 crc kubenswrapper[4796]: E1125 14:31:08.179506 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a9e3f58f447f41dc929ad5ee20af3d5253ff5e1584f8a7f754203f56346a7bd\": container with ID starting with 7a9e3f58f447f41dc929ad5ee20af3d5253ff5e1584f8a7f754203f56346a7bd not found: ID does not exist" containerID="7a9e3f58f447f41dc929ad5ee20af3d5253ff5e1584f8a7f754203f56346a7bd" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.179527 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a9e3f58f447f41dc929ad5ee20af3d5253ff5e1584f8a7f754203f56346a7bd"} err="failed to get container status \"7a9e3f58f447f41dc929ad5ee20af3d5253ff5e1584f8a7f754203f56346a7bd\": rpc error: code = NotFound desc = could not find container \"7a9e3f58f447f41dc929ad5ee20af3d5253ff5e1584f8a7f754203f56346a7bd\": container with ID starting with 7a9e3f58f447f41dc929ad5ee20af3d5253ff5e1584f8a7f754203f56346a7bd not found: ID does not exist" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.179542 4796 scope.go:117] "RemoveContainer" containerID="ef0b7183594486ce00fa56545d344ee4bfeb8f7d70f2099b8c623dc1a27bda82" Nov 25 14:31:08 crc kubenswrapper[4796]: E1125 14:31:08.179781 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef0b7183594486ce00fa56545d344ee4bfeb8f7d70f2099b8c623dc1a27bda82\": container with ID starting with ef0b7183594486ce00fa56545d344ee4bfeb8f7d70f2099b8c623dc1a27bda82 not found: ID does not exist" 
containerID="ef0b7183594486ce00fa56545d344ee4bfeb8f7d70f2099b8c623dc1a27bda82" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.179801 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef0b7183594486ce00fa56545d344ee4bfeb8f7d70f2099b8c623dc1a27bda82"} err="failed to get container status \"ef0b7183594486ce00fa56545d344ee4bfeb8f7d70f2099b8c623dc1a27bda82\": rpc error: code = NotFound desc = could not find container \"ef0b7183594486ce00fa56545d344ee4bfeb8f7d70f2099b8c623dc1a27bda82\": container with ID starting with ef0b7183594486ce00fa56545d344ee4bfeb8f7d70f2099b8c623dc1a27bda82 not found: ID does not exist" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.198717 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bbltb" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.240957 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c-utilities\") pod \"a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c\" (UID: \"a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c\") " Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.241026 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c-catalog-content\") pod \"a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c\" (UID: \"a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c\") " Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.241149 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h25xt\" (UniqueName: \"kubernetes.io/projected/a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c-kube-api-access-h25xt\") pod \"a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c\" (UID: \"a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c\") " Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 
14:31:08.253525 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c-utilities" (OuterVolumeSpecName: "utilities") pod "a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c" (UID: "a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.258059 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c-kube-api-access-h25xt" (OuterVolumeSpecName: "kube-api-access-h25xt") pod "a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c" (UID: "a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c"). InnerVolumeSpecName "kube-api-access-h25xt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.287769 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c" (UID: "a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.302866 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qqcls"] Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.305960 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qqcls"] Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.342801 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h25xt\" (UniqueName: \"kubernetes.io/projected/a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c-kube-api-access-h25xt\") on node \"crc\" DevicePath \"\"" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.343706 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.343728 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.416361 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="506a2195-43f9-4a3a-ad03-ad55166c7e03" path="/var/lib/kubelet/pods/506a2195-43f9-4a3a-ad03-ad55166c7e03/volumes" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.417017 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c3dfd30-55e6-44cf-9657-cff0cc0d2499" path="/var/lib/kubelet/pods/8c3dfd30-55e6-44cf-9657-cff0cc0d2499/volumes" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.417751 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1c15ec0-52c3-4420-9ccf-a50630662516" path="/var/lib/kubelet/pods/c1c15ec0-52c3-4420-9ccf-a50630662516/volumes" Nov 25 14:31:08 crc kubenswrapper[4796]: 
I1125 14:31:08.419002 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d44b94b1-15d2-48d6-8ae3-bc9787adc1e3" path="/var/lib/kubelet/pods/d44b94b1-15d2-48d6-8ae3-bc9787adc1e3/volumes" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.989382 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbltb" event={"ID":"a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c","Type":"ContainerDied","Data":"2131d6ad63dd42fecadf122001e6521cf46f473b0bc5aeba591cc6570009a8d8"} Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.989457 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bbltb" Nov 25 14:31:08 crc kubenswrapper[4796]: I1125 14:31:08.989506 4796 scope.go:117] "RemoveContainer" containerID="a61be15cab07c22060a5f797a643a2e9c05aca81fa52b9296d15d9e4a8eda6f0" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.007440 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hfxxz" event={"ID":"f1695f85-c20b-4708-b4f0-006f3a269301","Type":"ContainerStarted","Data":"810ec06b07ceaaefb4aa82cc6eaa9137a6b0cb84de0bfbf04a2772dbd83c3a4e"} Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.009488 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hfxxz" event={"ID":"f1695f85-c20b-4708-b4f0-006f3a269301","Type":"ContainerStarted","Data":"8f05ef965956f565c8970fe1bedc52b11ef7cbfba819dc6387390abfc78d755c"} Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.010927 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hfxxz" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.016096 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hfxxz" Nov 25 14:31:09 crc 
kubenswrapper[4796]: I1125 14:31:09.021647 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bbltb"] Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.027012 4796 scope.go:117] "RemoveContainer" containerID="6dbc5efc925c59b157e06b43b0635c5e24d64438af49c3af7696a59055668e3f" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.030310 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bbltb"] Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.070846 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hfxxz" podStartSLOduration=2.070825158 podStartE2EDuration="2.070825158s" podCreationTimestamp="2025-11-25 14:31:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:31:09.03667941 +0000 UTC m=+397.379788844" watchObservedRunningTime="2025-11-25 14:31:09.070825158 +0000 UTC m=+397.413934582" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.075551 4796 scope.go:117] "RemoveContainer" containerID="3e1a69fee3e741f7b55cc2c6ad3e12d6c9cb3f5653ef75b9f8fa276d3776b903" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.550562 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jbwpd"] Nov 25 14:31:09 crc kubenswrapper[4796]: E1125 14:31:09.550998 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1c15ec0-52c3-4420-9ccf-a50630662516" containerName="extract-utilities" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.551029 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1c15ec0-52c3-4420-9ccf-a50630662516" containerName="extract-utilities" Nov 25 14:31:09 crc kubenswrapper[4796]: E1125 14:31:09.551057 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8c3dfd30-55e6-44cf-9657-cff0cc0d2499" containerName="extract-content" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.551073 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c3dfd30-55e6-44cf-9657-cff0cc0d2499" containerName="extract-content" Nov 25 14:31:09 crc kubenswrapper[4796]: E1125 14:31:09.551096 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d44b94b1-15d2-48d6-8ae3-bc9787adc1e3" containerName="extract-content" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.551111 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d44b94b1-15d2-48d6-8ae3-bc9787adc1e3" containerName="extract-content" Nov 25 14:31:09 crc kubenswrapper[4796]: E1125 14:31:09.551144 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c3dfd30-55e6-44cf-9657-cff0cc0d2499" containerName="registry-server" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.551156 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c3dfd30-55e6-44cf-9657-cff0cc0d2499" containerName="registry-server" Nov 25 14:31:09 crc kubenswrapper[4796]: E1125 14:31:09.551170 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c" containerName="extract-content" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.551182 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c" containerName="extract-content" Nov 25 14:31:09 crc kubenswrapper[4796]: E1125 14:31:09.551199 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d44b94b1-15d2-48d6-8ae3-bc9787adc1e3" containerName="extract-utilities" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.551210 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d44b94b1-15d2-48d6-8ae3-bc9787adc1e3" containerName="extract-utilities" Nov 25 14:31:09 crc kubenswrapper[4796]: E1125 14:31:09.551229 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d44b94b1-15d2-48d6-8ae3-bc9787adc1e3" containerName="registry-server" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.551241 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d44b94b1-15d2-48d6-8ae3-bc9787adc1e3" containerName="registry-server" Nov 25 14:31:09 crc kubenswrapper[4796]: E1125 14:31:09.551257 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1c15ec0-52c3-4420-9ccf-a50630662516" containerName="extract-content" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.551271 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1c15ec0-52c3-4420-9ccf-a50630662516" containerName="extract-content" Nov 25 14:31:09 crc kubenswrapper[4796]: E1125 14:31:09.551289 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c" containerName="extract-utilities" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.551301 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c" containerName="extract-utilities" Nov 25 14:31:09 crc kubenswrapper[4796]: E1125 14:31:09.551318 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c" containerName="registry-server" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.551330 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c" containerName="registry-server" Nov 25 14:31:09 crc kubenswrapper[4796]: E1125 14:31:09.551344 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="506a2195-43f9-4a3a-ad03-ad55166c7e03" containerName="marketplace-operator" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.551357 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="506a2195-43f9-4a3a-ad03-ad55166c7e03" containerName="marketplace-operator" Nov 25 14:31:09 crc kubenswrapper[4796]: E1125 14:31:09.551375 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c1c15ec0-52c3-4420-9ccf-a50630662516" containerName="registry-server" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.551387 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1c15ec0-52c3-4420-9ccf-a50630662516" containerName="registry-server" Nov 25 14:31:09 crc kubenswrapper[4796]: E1125 14:31:09.551558 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c3dfd30-55e6-44cf-9657-cff0cc0d2499" containerName="extract-utilities" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.551601 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c3dfd30-55e6-44cf-9657-cff0cc0d2499" containerName="extract-utilities" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.551890 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1c15ec0-52c3-4420-9ccf-a50630662516" containerName="registry-server" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.551922 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d44b94b1-15d2-48d6-8ae3-bc9787adc1e3" containerName="registry-server" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.551944 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="506a2195-43f9-4a3a-ad03-ad55166c7e03" containerName="marketplace-operator" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.551974 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c" containerName="registry-server" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.551993 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c3dfd30-55e6-44cf-9657-cff0cc0d2499" containerName="registry-server" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.553680 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbwpd" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.555621 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.561398 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbwpd"] Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.662899 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b81a274-2b8a-4f1b-8890-ffa61ef91055-catalog-content\") pod \"redhat-marketplace-jbwpd\" (UID: \"9b81a274-2b8a-4f1b-8890-ffa61ef91055\") " pod="openshift-marketplace/redhat-marketplace-jbwpd" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.662956 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b81a274-2b8a-4f1b-8890-ffa61ef91055-utilities\") pod \"redhat-marketplace-jbwpd\" (UID: \"9b81a274-2b8a-4f1b-8890-ffa61ef91055\") " pod="openshift-marketplace/redhat-marketplace-jbwpd" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.663065 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8thdd\" (UniqueName: \"kubernetes.io/projected/9b81a274-2b8a-4f1b-8890-ffa61ef91055-kube-api-access-8thdd\") pod \"redhat-marketplace-jbwpd\" (UID: \"9b81a274-2b8a-4f1b-8890-ffa61ef91055\") " pod="openshift-marketplace/redhat-marketplace-jbwpd" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.742838 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hff74"] Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.743835 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hff74" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.745623 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.762165 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hff74"] Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.764075 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0b227f5-fd98-48f5-8b0b-4d10096c407b-utilities\") pod \"community-operators-hff74\" (UID: \"b0b227f5-fd98-48f5-8b0b-4d10096c407b\") " pod="openshift-marketplace/community-operators-hff74" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.764120 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b81a274-2b8a-4f1b-8890-ffa61ef91055-catalog-content\") pod \"redhat-marketplace-jbwpd\" (UID: \"9b81a274-2b8a-4f1b-8890-ffa61ef91055\") " pod="openshift-marketplace/redhat-marketplace-jbwpd" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.764145 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b81a274-2b8a-4f1b-8890-ffa61ef91055-utilities\") pod \"redhat-marketplace-jbwpd\" (UID: \"9b81a274-2b8a-4f1b-8890-ffa61ef91055\") " pod="openshift-marketplace/redhat-marketplace-jbwpd" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.764184 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbpls\" (UniqueName: \"kubernetes.io/projected/b0b227f5-fd98-48f5-8b0b-4d10096c407b-kube-api-access-gbpls\") pod \"community-operators-hff74\" (UID: \"b0b227f5-fd98-48f5-8b0b-4d10096c407b\") " 
pod="openshift-marketplace/community-operators-hff74" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.764243 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8thdd\" (UniqueName: \"kubernetes.io/projected/9b81a274-2b8a-4f1b-8890-ffa61ef91055-kube-api-access-8thdd\") pod \"redhat-marketplace-jbwpd\" (UID: \"9b81a274-2b8a-4f1b-8890-ffa61ef91055\") " pod="openshift-marketplace/redhat-marketplace-jbwpd" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.764299 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0b227f5-fd98-48f5-8b0b-4d10096c407b-catalog-content\") pod \"community-operators-hff74\" (UID: \"b0b227f5-fd98-48f5-8b0b-4d10096c407b\") " pod="openshift-marketplace/community-operators-hff74" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.764600 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b81a274-2b8a-4f1b-8890-ffa61ef91055-utilities\") pod \"redhat-marketplace-jbwpd\" (UID: \"9b81a274-2b8a-4f1b-8890-ffa61ef91055\") " pod="openshift-marketplace/redhat-marketplace-jbwpd" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.764811 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b81a274-2b8a-4f1b-8890-ffa61ef91055-catalog-content\") pod \"redhat-marketplace-jbwpd\" (UID: \"9b81a274-2b8a-4f1b-8890-ffa61ef91055\") " pod="openshift-marketplace/redhat-marketplace-jbwpd" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.786352 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8thdd\" (UniqueName: \"kubernetes.io/projected/9b81a274-2b8a-4f1b-8890-ffa61ef91055-kube-api-access-8thdd\") pod \"redhat-marketplace-jbwpd\" (UID: \"9b81a274-2b8a-4f1b-8890-ffa61ef91055\") " 
pod="openshift-marketplace/redhat-marketplace-jbwpd" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.865199 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbpls\" (UniqueName: \"kubernetes.io/projected/b0b227f5-fd98-48f5-8b0b-4d10096c407b-kube-api-access-gbpls\") pod \"community-operators-hff74\" (UID: \"b0b227f5-fd98-48f5-8b0b-4d10096c407b\") " pod="openshift-marketplace/community-operators-hff74" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.865289 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0b227f5-fd98-48f5-8b0b-4d10096c407b-catalog-content\") pod \"community-operators-hff74\" (UID: \"b0b227f5-fd98-48f5-8b0b-4d10096c407b\") " pod="openshift-marketplace/community-operators-hff74" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.865314 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0b227f5-fd98-48f5-8b0b-4d10096c407b-utilities\") pod \"community-operators-hff74\" (UID: \"b0b227f5-fd98-48f5-8b0b-4d10096c407b\") " pod="openshift-marketplace/community-operators-hff74" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.865741 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0b227f5-fd98-48f5-8b0b-4d10096c407b-utilities\") pod \"community-operators-hff74\" (UID: \"b0b227f5-fd98-48f5-8b0b-4d10096c407b\") " pod="openshift-marketplace/community-operators-hff74" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.865807 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0b227f5-fd98-48f5-8b0b-4d10096c407b-catalog-content\") pod \"community-operators-hff74\" (UID: \"b0b227f5-fd98-48f5-8b0b-4d10096c407b\") " 
pod="openshift-marketplace/community-operators-hff74" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.875321 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jbwpd" Nov 25 14:31:09 crc kubenswrapper[4796]: I1125 14:31:09.883211 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbpls\" (UniqueName: \"kubernetes.io/projected/b0b227f5-fd98-48f5-8b0b-4d10096c407b-kube-api-access-gbpls\") pod \"community-operators-hff74\" (UID: \"b0b227f5-fd98-48f5-8b0b-4d10096c407b\") " pod="openshift-marketplace/community-operators-hff74" Nov 25 14:31:10 crc kubenswrapper[4796]: I1125 14:31:10.066190 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hff74" Nov 25 14:31:10 crc kubenswrapper[4796]: I1125 14:31:10.291648 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jbwpd"] Nov 25 14:31:10 crc kubenswrapper[4796]: I1125 14:31:10.319750 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hff74"] Nov 25 14:31:10 crc kubenswrapper[4796]: W1125 14:31:10.325756 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0b227f5_fd98_48f5_8b0b_4d10096c407b.slice/crio-71fe692513cd37a20e2e41841828f17799e48ae84f40ec1605deb7a9ac9a5abd WatchSource:0}: Error finding container 71fe692513cd37a20e2e41841828f17799e48ae84f40ec1605deb7a9ac9a5abd: Status 404 returned error can't find the container with id 71fe692513cd37a20e2e41841828f17799e48ae84f40ec1605deb7a9ac9a5abd Nov 25 14:31:10 crc kubenswrapper[4796]: I1125 14:31:10.419002 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c" path="/var/lib/kubelet/pods/a7ffd7a1-3ef7-4226-b3b5-8e1898abe83c/volumes" Nov 25 14:31:11 crc 
kubenswrapper[4796]: I1125 14:31:11.021425 4796 generic.go:334] "Generic (PLEG): container finished" podID="b0b227f5-fd98-48f5-8b0b-4d10096c407b" containerID="93cce13ab9f1d32fceaf6ca0fd55c42f8f4cf21c42fe9bfdedc6ad0dca2f2740" exitCode=0 Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.021518 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hff74" event={"ID":"b0b227f5-fd98-48f5-8b0b-4d10096c407b","Type":"ContainerDied","Data":"93cce13ab9f1d32fceaf6ca0fd55c42f8f4cf21c42fe9bfdedc6ad0dca2f2740"} Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.021917 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hff74" event={"ID":"b0b227f5-fd98-48f5-8b0b-4d10096c407b","Type":"ContainerStarted","Data":"71fe692513cd37a20e2e41841828f17799e48ae84f40ec1605deb7a9ac9a5abd"} Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.024136 4796 generic.go:334] "Generic (PLEG): container finished" podID="9b81a274-2b8a-4f1b-8890-ffa61ef91055" containerID="0177490ef58025d7daed44af178cdcc931060d2c3d2872a046f9d4cc0a9a37a4" exitCode=0 Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.024221 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbwpd" event={"ID":"9b81a274-2b8a-4f1b-8890-ffa61ef91055","Type":"ContainerDied","Data":"0177490ef58025d7daed44af178cdcc931060d2c3d2872a046f9d4cc0a9a37a4"} Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.024313 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbwpd" event={"ID":"9b81a274-2b8a-4f1b-8890-ffa61ef91055","Type":"ContainerStarted","Data":"59ab4b582e5da58b15d5ebf0fe67d9be95b527d19ac499ae05dcd8012cf37a95"} Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.359736 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" 
podUID="d07e3b8b-d9ae-40f6-901c-1be058824059" containerName="registry" containerID="cri-o://9fb613b99763a92d12779ce685598f9967319da2d8e64df8eb8f2769acbc46c8" gracePeriod=30 Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.773255 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.791377 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d07e3b8b-d9ae-40f6-901c-1be058824059-trusted-ca\") pod \"d07e3b8b-d9ae-40f6-901c-1be058824059\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.791415 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d07e3b8b-d9ae-40f6-901c-1be058824059-bound-sa-token\") pod \"d07e3b8b-d9ae-40f6-901c-1be058824059\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.791742 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d07e3b8b-d9ae-40f6-901c-1be058824059-registry-certificates\") pod \"d07e3b8b-d9ae-40f6-901c-1be058824059\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.792357 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d07e3b8b-d9ae-40f6-901c-1be058824059-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d07e3b8b-d9ae-40f6-901c-1be058824059" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.792670 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d07e3b8b-d9ae-40f6-901c-1be058824059-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d07e3b8b-d9ae-40f6-901c-1be058824059" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.792696 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d07e3b8b-d9ae-40f6-901c-1be058824059-ca-trust-extracted\") pod \"d07e3b8b-d9ae-40f6-901c-1be058824059\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.792795 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd5zp\" (UniqueName: \"kubernetes.io/projected/d07e3b8b-d9ae-40f6-901c-1be058824059-kube-api-access-kd5zp\") pod \"d07e3b8b-d9ae-40f6-901c-1be058824059\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.793100 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d07e3b8b-d9ae-40f6-901c-1be058824059-installation-pull-secrets\") pod \"d07e3b8b-d9ae-40f6-901c-1be058824059\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.793296 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"d07e3b8b-d9ae-40f6-901c-1be058824059\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " Nov 
25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.793346 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d07e3b8b-d9ae-40f6-901c-1be058824059-registry-tls\") pod \"d07e3b8b-d9ae-40f6-901c-1be058824059\" (UID: \"d07e3b8b-d9ae-40f6-901c-1be058824059\") " Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.793858 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d07e3b8b-d9ae-40f6-901c-1be058824059-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.793892 4796 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d07e3b8b-d9ae-40f6-901c-1be058824059-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.798944 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d07e3b8b-d9ae-40f6-901c-1be058824059-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d07e3b8b-d9ae-40f6-901c-1be058824059" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.800956 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d07e3b8b-d9ae-40f6-901c-1be058824059-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d07e3b8b-d9ae-40f6-901c-1be058824059" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.808194 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d07e3b8b-d9ae-40f6-901c-1be058824059-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d07e3b8b-d9ae-40f6-901c-1be058824059" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.808553 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d07e3b8b-d9ae-40f6-901c-1be058824059-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d07e3b8b-d9ae-40f6-901c-1be058824059" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.809287 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d07e3b8b-d9ae-40f6-901c-1be058824059-kube-api-access-kd5zp" (OuterVolumeSpecName: "kube-api-access-kd5zp") pod "d07e3b8b-d9ae-40f6-901c-1be058824059" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059"). InnerVolumeSpecName "kube-api-access-kd5zp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.810111 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "d07e3b8b-d9ae-40f6-901c-1be058824059" (UID: "d07e3b8b-d9ae-40f6-901c-1be058824059"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.895160 4796 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d07e3b8b-d9ae-40f6-901c-1be058824059-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.895191 4796 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d07e3b8b-d9ae-40f6-901c-1be058824059-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.895201 4796 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d07e3b8b-d9ae-40f6-901c-1be058824059-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.895210 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd5zp\" (UniqueName: \"kubernetes.io/projected/d07e3b8b-d9ae-40f6-901c-1be058824059-kube-api-access-kd5zp\") on node \"crc\" DevicePath \"\"" Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.895221 4796 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d07e3b8b-d9ae-40f6-901c-1be058824059-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.947428 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g98hr"] Nov 25 14:31:11 crc kubenswrapper[4796]: E1125 14:31:11.947769 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d07e3b8b-d9ae-40f6-901c-1be058824059" containerName="registry" Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.947791 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d07e3b8b-d9ae-40f6-901c-1be058824059" containerName="registry" Nov 25 14:31:11 crc 
kubenswrapper[4796]: I1125 14:31:11.947939 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d07e3b8b-d9ae-40f6-901c-1be058824059" containerName="registry" Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.948833 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g98hr" Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.951407 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.957451 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g98hr"] Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.995893 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fc0642f-5868-4241-a027-a9cd7e401962-catalog-content\") pod \"redhat-operators-g98hr\" (UID: \"1fc0642f-5868-4241-a027-a9cd7e401962\") " pod="openshift-marketplace/redhat-operators-g98hr" Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.996016 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fc0642f-5868-4241-a027-a9cd7e401962-utilities\") pod \"redhat-operators-g98hr\" (UID: \"1fc0642f-5868-4241-a027-a9cd7e401962\") " pod="openshift-marketplace/redhat-operators-g98hr" Nov 25 14:31:11 crc kubenswrapper[4796]: I1125 14:31:11.996053 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qvbs\" (UniqueName: \"kubernetes.io/projected/1fc0642f-5868-4241-a027-a9cd7e401962-kube-api-access-9qvbs\") pod \"redhat-operators-g98hr\" (UID: \"1fc0642f-5868-4241-a027-a9cd7e401962\") " pod="openshift-marketplace/redhat-operators-g98hr" Nov 25 14:31:12 crc kubenswrapper[4796]: 
I1125 14:31:12.031506 4796 generic.go:334] "Generic (PLEG): container finished" podID="9b81a274-2b8a-4f1b-8890-ffa61ef91055" containerID="f4b759c10f43df579987e2bd5b3c84583ad2b58d5c438d43812ac43c0d272e7c" exitCode=0
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.031616 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbwpd" event={"ID":"9b81a274-2b8a-4f1b-8890-ffa61ef91055","Type":"ContainerDied","Data":"f4b759c10f43df579987e2bd5b3c84583ad2b58d5c438d43812ac43c0d272e7c"}
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.033501 4796 generic.go:334] "Generic (PLEG): container finished" podID="d07e3b8b-d9ae-40f6-901c-1be058824059" containerID="9fb613b99763a92d12779ce685598f9967319da2d8e64df8eb8f2769acbc46c8" exitCode=0
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.033691 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" event={"ID":"d07e3b8b-d9ae-40f6-901c-1be058824059","Type":"ContainerDied","Data":"9fb613b99763a92d12779ce685598f9967319da2d8e64df8eb8f2769acbc46c8"}
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.033731 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-95xvf" event={"ID":"d07e3b8b-d9ae-40f6-901c-1be058824059","Type":"ContainerDied","Data":"5702379ad0b40fb4da498e3daada175324912f7dbe713f39894a7b94b89efe81"}
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.033777 4796 scope.go:117] "RemoveContainer" containerID="9fb613b99763a92d12779ce685598f9967319da2d8e64df8eb8f2769acbc46c8"
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.034675 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-95xvf"
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.095336 4796 scope.go:117] "RemoveContainer" containerID="9fb613b99763a92d12779ce685598f9967319da2d8e64df8eb8f2769acbc46c8"
Nov 25 14:31:12 crc kubenswrapper[4796]: E1125 14:31:12.095939 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fb613b99763a92d12779ce685598f9967319da2d8e64df8eb8f2769acbc46c8\": container with ID starting with 9fb613b99763a92d12779ce685598f9967319da2d8e64df8eb8f2769acbc46c8 not found: ID does not exist" containerID="9fb613b99763a92d12779ce685598f9967319da2d8e64df8eb8f2769acbc46c8"
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.095969 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fb613b99763a92d12779ce685598f9967319da2d8e64df8eb8f2769acbc46c8"} err="failed to get container status \"9fb613b99763a92d12779ce685598f9967319da2d8e64df8eb8f2769acbc46c8\": rpc error: code = NotFound desc = could not find container \"9fb613b99763a92d12779ce685598f9967319da2d8e64df8eb8f2769acbc46c8\": container with ID starting with 9fb613b99763a92d12779ce685598f9967319da2d8e64df8eb8f2769acbc46c8 not found: ID does not exist"
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.101840 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fc0642f-5868-4241-a027-a9cd7e401962-utilities\") pod \"redhat-operators-g98hr\" (UID: \"1fc0642f-5868-4241-a027-a9cd7e401962\") " pod="openshift-marketplace/redhat-operators-g98hr"
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.101896 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qvbs\" (UniqueName: \"kubernetes.io/projected/1fc0642f-5868-4241-a027-a9cd7e401962-kube-api-access-9qvbs\") pod \"redhat-operators-g98hr\" (UID: \"1fc0642f-5868-4241-a027-a9cd7e401962\") " pod="openshift-marketplace/redhat-operators-g98hr"
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.101951 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fc0642f-5868-4241-a027-a9cd7e401962-catalog-content\") pod \"redhat-operators-g98hr\" (UID: \"1fc0642f-5868-4241-a027-a9cd7e401962\") " pod="openshift-marketplace/redhat-operators-g98hr"
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.103043 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fc0642f-5868-4241-a027-a9cd7e401962-utilities\") pod \"redhat-operators-g98hr\" (UID: \"1fc0642f-5868-4241-a027-a9cd7e401962\") " pod="openshift-marketplace/redhat-operators-g98hr"
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.103502 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fc0642f-5868-4241-a027-a9cd7e401962-catalog-content\") pod \"redhat-operators-g98hr\" (UID: \"1fc0642f-5868-4241-a027-a9cd7e401962\") " pod="openshift-marketplace/redhat-operators-g98hr"
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.106628 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-95xvf"]
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.110826 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-95xvf"]
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.134355 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qvbs\" (UniqueName: \"kubernetes.io/projected/1fc0642f-5868-4241-a027-a9cd7e401962-kube-api-access-9qvbs\") pod \"redhat-operators-g98hr\" (UID: \"1fc0642f-5868-4241-a027-a9cd7e401962\") " pod="openshift-marketplace/redhat-operators-g98hr"
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.143767 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kwqht"]
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.148838 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kwqht"
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.152245 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.153324 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kwqht"]
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.203321 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c5e697-1e70-4d50-a2ed-f7dba77a5520-utilities\") pod \"certified-operators-kwqht\" (UID: \"60c5e697-1e70-4d50-a2ed-f7dba77a5520\") " pod="openshift-marketplace/certified-operators-kwqht"
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.203367 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c5e697-1e70-4d50-a2ed-f7dba77a5520-catalog-content\") pod \"certified-operators-kwqht\" (UID: \"60c5e697-1e70-4d50-a2ed-f7dba77a5520\") " pod="openshift-marketplace/certified-operators-kwqht"
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.203490 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hzwq\" (UniqueName: \"kubernetes.io/projected/60c5e697-1e70-4d50-a2ed-f7dba77a5520-kube-api-access-6hzwq\") pod \"certified-operators-kwqht\" (UID: \"60c5e697-1e70-4d50-a2ed-f7dba77a5520\") " pod="openshift-marketplace/certified-operators-kwqht"
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.272937 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g98hr"
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.304368 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c5e697-1e70-4d50-a2ed-f7dba77a5520-catalog-content\") pod \"certified-operators-kwqht\" (UID: \"60c5e697-1e70-4d50-a2ed-f7dba77a5520\") " pod="openshift-marketplace/certified-operators-kwqht"
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.304451 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hzwq\" (UniqueName: \"kubernetes.io/projected/60c5e697-1e70-4d50-a2ed-f7dba77a5520-kube-api-access-6hzwq\") pod \"certified-operators-kwqht\" (UID: \"60c5e697-1e70-4d50-a2ed-f7dba77a5520\") " pod="openshift-marketplace/certified-operators-kwqht"
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.304487 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c5e697-1e70-4d50-a2ed-f7dba77a5520-utilities\") pod \"certified-operators-kwqht\" (UID: \"60c5e697-1e70-4d50-a2ed-f7dba77a5520\") " pod="openshift-marketplace/certified-operators-kwqht"
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.304875 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c5e697-1e70-4d50-a2ed-f7dba77a5520-utilities\") pod \"certified-operators-kwqht\" (UID: \"60c5e697-1e70-4d50-a2ed-f7dba77a5520\") " pod="openshift-marketplace/certified-operators-kwqht"
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.305081 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c5e697-1e70-4d50-a2ed-f7dba77a5520-catalog-content\") pod \"certified-operators-kwqht\" (UID: \"60c5e697-1e70-4d50-a2ed-f7dba77a5520\") " pod="openshift-marketplace/certified-operators-kwqht"
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.322862 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hzwq\" (UniqueName: \"kubernetes.io/projected/60c5e697-1e70-4d50-a2ed-f7dba77a5520-kube-api-access-6hzwq\") pod \"certified-operators-kwqht\" (UID: \"60c5e697-1e70-4d50-a2ed-f7dba77a5520\") " pod="openshift-marketplace/certified-operators-kwqht"
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.424021 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d07e3b8b-d9ae-40f6-901c-1be058824059" path="/var/lib/kubelet/pods/d07e3b8b-d9ae-40f6-901c-1be058824059/volumes"
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.469772 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kwqht"
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.474485 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g98hr"]
Nov 25 14:31:12 crc kubenswrapper[4796]: I1125 14:31:12.884641 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kwqht"]
Nov 25 14:31:13 crc kubenswrapper[4796]: I1125 14:31:13.045087 4796 generic.go:334] "Generic (PLEG): container finished" podID="1fc0642f-5868-4241-a027-a9cd7e401962" containerID="7ad910d5b08ad72a0395a661996799621e0e3f0c3cf6831fe364bcf3d3f35ec3" exitCode=0
Nov 25 14:31:13 crc kubenswrapper[4796]: I1125 14:31:13.045140 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g98hr" event={"ID":"1fc0642f-5868-4241-a027-a9cd7e401962","Type":"ContainerDied","Data":"7ad910d5b08ad72a0395a661996799621e0e3f0c3cf6831fe364bcf3d3f35ec3"}
Nov 25 14:31:13 crc kubenswrapper[4796]: I1125 14:31:13.045400 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g98hr" event={"ID":"1fc0642f-5868-4241-a027-a9cd7e401962","Type":"ContainerStarted","Data":"7fde644b491c90d0ce22f9e77e25fb885fd9ea23ba4e1632703d673df8af5c8b"}
Nov 25 14:31:13 crc kubenswrapper[4796]: I1125 14:31:13.063646 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwqht" event={"ID":"60c5e697-1e70-4d50-a2ed-f7dba77a5520","Type":"ContainerStarted","Data":"e46a3532ba73975571aaa775f6fad5b66dcd9d1a8dbc7271218379484cf1e1b9"}
Nov 25 14:31:13 crc kubenswrapper[4796]: I1125 14:31:13.063699 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwqht" event={"ID":"60c5e697-1e70-4d50-a2ed-f7dba77a5520","Type":"ContainerStarted","Data":"eb058a269bea39ffb8fbae161f0aa206913dca066c873fb69a161515986a4d61"}
Nov 25 14:31:13 crc kubenswrapper[4796]: I1125 14:31:13.068018 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jbwpd" event={"ID":"9b81a274-2b8a-4f1b-8890-ffa61ef91055","Type":"ContainerStarted","Data":"da3acf3cdf02a8f130086ef42a997bd2acff9b3b37eb09dd8a1b2d0e3b3af107"}
Nov 25 14:31:13 crc kubenswrapper[4796]: I1125 14:31:13.091211 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jbwpd" podStartSLOduration=2.6235004809999998 podStartE2EDuration="4.091192302s" podCreationTimestamp="2025-11-25 14:31:09 +0000 UTC" firstStartedPulling="2025-11-25 14:31:11.025415905 +0000 UTC m=+399.368525359" lastFinishedPulling="2025-11-25 14:31:12.493107756 +0000 UTC m=+400.836217180" observedRunningTime="2025-11-25 14:31:13.090200501 +0000 UTC m=+401.433309925" watchObservedRunningTime="2025-11-25 14:31:13.091192302 +0000 UTC m=+401.434301716"
Nov 25 14:31:14 crc kubenswrapper[4796]: I1125 14:31:14.073150 4796 generic.go:334] "Generic (PLEG): container finished" podID="60c5e697-1e70-4d50-a2ed-f7dba77a5520" containerID="e46a3532ba73975571aaa775f6fad5b66dcd9d1a8dbc7271218379484cf1e1b9" exitCode=0
Nov 25 14:31:14 crc kubenswrapper[4796]: I1125 14:31:14.073537 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwqht" event={"ID":"60c5e697-1e70-4d50-a2ed-f7dba77a5520","Type":"ContainerDied","Data":"e46a3532ba73975571aaa775f6fad5b66dcd9d1a8dbc7271218379484cf1e1b9"}
Nov 25 14:31:14 crc kubenswrapper[4796]: I1125 14:31:14.076718 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g98hr" event={"ID":"1fc0642f-5868-4241-a027-a9cd7e401962","Type":"ContainerStarted","Data":"cf545c9c2194db384c4624a19c5ef565b46227c0c20ea61a31371e8ba343ceb8"}
Nov 25 14:31:15 crc kubenswrapper[4796]: I1125 14:31:15.086654 4796 generic.go:334] "Generic (PLEG): container finished" podID="1fc0642f-5868-4241-a027-a9cd7e401962" containerID="cf545c9c2194db384c4624a19c5ef565b46227c0c20ea61a31371e8ba343ceb8" exitCode=0
Nov 25 14:31:15 crc kubenswrapper[4796]: I1125 14:31:15.086708 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g98hr" event={"ID":"1fc0642f-5868-4241-a027-a9cd7e401962","Type":"ContainerDied","Data":"cf545c9c2194db384c4624a19c5ef565b46227c0c20ea61a31371e8ba343ceb8"}
Nov 25 14:31:16 crc kubenswrapper[4796]: I1125 14:31:16.094377 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g98hr" event={"ID":"1fc0642f-5868-4241-a027-a9cd7e401962","Type":"ContainerStarted","Data":"749b8274a14d7b04fe90026da8e54928bf7570c164b1d100b820dac76783b1c9"}
Nov 25 14:31:16 crc kubenswrapper[4796]: I1125 14:31:16.099892 4796 generic.go:334] "Generic (PLEG): container finished" podID="b0b227f5-fd98-48f5-8b0b-4d10096c407b" containerID="bb501873c58676acf3214dcb45b7101bea9c1c35f534aae0f8e6ac900727aa8c" exitCode=0
Nov 25 14:31:16 crc kubenswrapper[4796]: I1125 14:31:16.099949 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hff74" event={"ID":"b0b227f5-fd98-48f5-8b0b-4d10096c407b","Type":"ContainerDied","Data":"bb501873c58676acf3214dcb45b7101bea9c1c35f534aae0f8e6ac900727aa8c"}
Nov 25 14:31:16 crc kubenswrapper[4796]: I1125 14:31:16.118453 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g98hr" podStartSLOduration=2.407664359 podStartE2EDuration="5.118432882s" podCreationTimestamp="2025-11-25 14:31:11 +0000 UTC" firstStartedPulling="2025-11-25 14:31:13.046590203 +0000 UTC m=+401.389699637" lastFinishedPulling="2025-11-25 14:31:15.757358726 +0000 UTC m=+404.100468160" observedRunningTime="2025-11-25 14:31:16.117182463 +0000 UTC m=+404.460291917" watchObservedRunningTime="2025-11-25 14:31:16.118432882 +0000 UTC m=+404.461542306"
Nov 25 14:31:18 crc kubenswrapper[4796]: I1125 14:31:18.115079 4796 generic.go:334] "Generic (PLEG): container finished" podID="60c5e697-1e70-4d50-a2ed-f7dba77a5520" containerID="1294cca3bdfb96e55674dc9ee02f302db34dc29f09ac7030e2c1006509c97d1c" exitCode=0
Nov 25 14:31:18 crc kubenswrapper[4796]: I1125 14:31:18.115299 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwqht" event={"ID":"60c5e697-1e70-4d50-a2ed-f7dba77a5520","Type":"ContainerDied","Data":"1294cca3bdfb96e55674dc9ee02f302db34dc29f09ac7030e2c1006509c97d1c"}
Nov 25 14:31:18 crc kubenswrapper[4796]: I1125 14:31:18.122911 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hff74" event={"ID":"b0b227f5-fd98-48f5-8b0b-4d10096c407b","Type":"ContainerStarted","Data":"9aabfdf5ebf815325c1fb3ca7c9b013881f4b10e5783851778cc9377d035e874"}
Nov 25 14:31:18 crc kubenswrapper[4796]: I1125 14:31:18.161910 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hff74" podStartSLOduration=3.664650278 podStartE2EDuration="9.161889785s" podCreationTimestamp="2025-11-25 14:31:09 +0000 UTC" firstStartedPulling="2025-11-25 14:31:11.024721713 +0000 UTC m=+399.367831137" lastFinishedPulling="2025-11-25 14:31:16.52196123 +0000 UTC m=+404.865070644" observedRunningTime="2025-11-25 14:31:18.158419128 +0000 UTC m=+406.501528572" watchObservedRunningTime="2025-11-25 14:31:18.161889785 +0000 UTC m=+406.504999219"
Nov 25 14:31:19 crc kubenswrapper[4796]: I1125 14:31:19.130562 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwqht" event={"ID":"60c5e697-1e70-4d50-a2ed-f7dba77a5520","Type":"ContainerStarted","Data":"bcb660bd49a5661acbc2f6d41a68fa01bc28b33c211670abed8f00f66e373ad9"}
Nov 25 14:31:19 crc kubenswrapper[4796]: I1125 14:31:19.151387 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kwqht" podStartSLOduration=3.791006498 podStartE2EDuration="7.151366106s" podCreationTimestamp="2025-11-25 14:31:12 +0000 UTC" firstStartedPulling="2025-11-25 14:31:15.160187318 +0000 UTC m=+403.503296732" lastFinishedPulling="2025-11-25 14:31:18.520546916 +0000 UTC m=+406.863656340" observedRunningTime="2025-11-25 14:31:19.148209519 +0000 UTC m=+407.491318943" watchObservedRunningTime="2025-11-25 14:31:19.151366106 +0000 UTC m=+407.494475530"
Nov 25 14:31:19 crc kubenswrapper[4796]: I1125 14:31:19.513706 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 14:31:19 crc kubenswrapper[4796]: I1125 14:31:19.514178 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 14:31:19 crc kubenswrapper[4796]: I1125 14:31:19.876329 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jbwpd"
Nov 25 14:31:19 crc kubenswrapper[4796]: I1125 14:31:19.876381 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jbwpd"
Nov 25 14:31:19 crc kubenswrapper[4796]: I1125 14:31:19.920219 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jbwpd"
Nov 25 14:31:20 crc kubenswrapper[4796]: I1125 14:31:20.066667 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hff74"
Nov 25 14:31:20 crc kubenswrapper[4796]: I1125 14:31:20.066719 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hff74"
Nov 25 14:31:20 crc kubenswrapper[4796]: I1125 14:31:20.115367 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hff74"
Nov 25 14:31:20 crc kubenswrapper[4796]: I1125 14:31:20.174205 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jbwpd"
Nov 25 14:31:22 crc kubenswrapper[4796]: I1125 14:31:22.273825 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g98hr"
Nov 25 14:31:22 crc kubenswrapper[4796]: I1125 14:31:22.273892 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g98hr"
Nov 25 14:31:22 crc kubenswrapper[4796]: I1125 14:31:22.335325 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g98hr"
Nov 25 14:31:22 crc kubenswrapper[4796]: I1125 14:31:22.471853 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kwqht"
Nov 25 14:31:22 crc kubenswrapper[4796]: I1125 14:31:22.471969 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kwqht"
Nov 25 14:31:22 crc kubenswrapper[4796]: I1125 14:31:22.528820 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kwqht"
Nov 25 14:31:23 crc kubenswrapper[4796]: I1125 14:31:23.206875 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kwqht"
Nov 25 14:31:23 crc kubenswrapper[4796]: I1125 14:31:23.212426 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g98hr"
Nov 25 14:31:30 crc kubenswrapper[4796]: I1125 14:31:30.128467 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hff74"
Nov 25 14:31:49 crc kubenswrapper[4796]: I1125 14:31:49.514031 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 14:31:49 crc kubenswrapper[4796]: I1125 14:31:49.514683 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 14:31:49 crc kubenswrapper[4796]: I1125 14:31:49.514735 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl"
Nov 25 14:31:49 crc kubenswrapper[4796]: I1125 14:31:49.515234 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6d416e75f99f56c789cfd2d95656cdac196835b00819700290456ea5dec1fb66"} pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 14:31:49 crc kubenswrapper[4796]: I1125 14:31:49.515298 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" containerID="cri-o://6d416e75f99f56c789cfd2d95656cdac196835b00819700290456ea5dec1fb66" gracePeriod=600
Nov 25 14:31:50 crc kubenswrapper[4796]: I1125 14:31:50.344787 4796 generic.go:334] "Generic (PLEG): container finished" podID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerID="6d416e75f99f56c789cfd2d95656cdac196835b00819700290456ea5dec1fb66" exitCode=0
Nov 25 14:31:50 crc kubenswrapper[4796]: I1125 14:31:50.344885 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerDied","Data":"6d416e75f99f56c789cfd2d95656cdac196835b00819700290456ea5dec1fb66"}
Nov 25 14:31:50 crc kubenswrapper[4796]: I1125 14:31:50.345741 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerStarted","Data":"dc030d59a73583025e9c54fa4553f2524065ee43c0dc58ac32a4d2cfc4a3581d"}
Nov 25 14:31:50 crc kubenswrapper[4796]: I1125 14:31:50.345779 4796 scope.go:117] "RemoveContainer" containerID="52ae8d61d0942e0624997f6214aa104a793f603be378ede8e4896846b2f06db4"
Nov 25 14:33:49 crc kubenswrapper[4796]: I1125 14:33:49.514168 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 14:33:49 crc kubenswrapper[4796]: I1125 14:33:49.516446 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 14:34:19 crc kubenswrapper[4796]: I1125 14:34:19.513801 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 14:34:19 crc kubenswrapper[4796]: I1125 14:34:19.514640 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 14:34:49 crc kubenswrapper[4796]: I1125 14:34:49.514193 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 14:34:49 crc kubenswrapper[4796]: I1125 14:34:49.514792 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 14:34:49 crc kubenswrapper[4796]: I1125 14:34:49.514839 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl"
Nov 25 14:34:49 crc kubenswrapper[4796]: I1125 14:34:49.515413 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dc030d59a73583025e9c54fa4553f2524065ee43c0dc58ac32a4d2cfc4a3581d"} pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 14:34:49 crc kubenswrapper[4796]: I1125 14:34:49.515480 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" containerID="cri-o://dc030d59a73583025e9c54fa4553f2524065ee43c0dc58ac32a4d2cfc4a3581d" gracePeriod=600
Nov 25 14:34:49 crc kubenswrapper[4796]: I1125 14:34:49.884226 4796 generic.go:334] "Generic (PLEG): container finished" podID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerID="dc030d59a73583025e9c54fa4553f2524065ee43c0dc58ac32a4d2cfc4a3581d" exitCode=0
Nov 25 14:34:49 crc kubenswrapper[4796]: I1125 14:34:49.884460 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerDied","Data":"dc030d59a73583025e9c54fa4553f2524065ee43c0dc58ac32a4d2cfc4a3581d"}
Nov 25 14:34:49 crc kubenswrapper[4796]: I1125 14:34:49.884668 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerStarted","Data":"51ad5aaaaec69282af8ba8d3ab0515dc687f8d212c22650d81fdfbfdba4b24a5"}
Nov 25 14:34:49 crc kubenswrapper[4796]: I1125 14:34:49.884699 4796 scope.go:117] "RemoveContainer" containerID="6d416e75f99f56c789cfd2d95656cdac196835b00819700290456ea5dec1fb66"
Nov 25 14:36:46 crc kubenswrapper[4796]: I1125 14:36:46.464198 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-ttph6"]
Nov 25 14:36:46 crc kubenswrapper[4796]: I1125 14:36:46.465416 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-ttph6"
Nov 25 14:36:46 crc kubenswrapper[4796]: I1125 14:36:46.467966 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Nov 25 14:36:46 crc kubenswrapper[4796]: I1125 14:36:46.471482 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-ttph6"]
Nov 25 14:36:46 crc kubenswrapper[4796]: I1125 14:36:46.472626 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Nov 25 14:36:46 crc kubenswrapper[4796]: I1125 14:36:46.472967 4796 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-mzmkd"
Nov 25 14:36:46 crc kubenswrapper[4796]: I1125 14:36:46.482624 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-qzs2l"]
Nov 25 14:36:46 crc kubenswrapper[4796]: I1125 14:36:46.483329 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-qzs2l"
Nov 25 14:36:46 crc kubenswrapper[4796]: I1125 14:36:46.492913 4796 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-p4l4w"
Nov 25 14:36:46 crc kubenswrapper[4796]: I1125 14:36:46.498644 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-qzs2l"]
Nov 25 14:36:46 crc kubenswrapper[4796]: I1125 14:36:46.501365 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-n7x98"]
Nov 25 14:36:46 crc kubenswrapper[4796]: I1125 14:36:46.502108 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-n7x98"
Nov 25 14:36:46 crc kubenswrapper[4796]: I1125 14:36:46.513709 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-n7x98"]
Nov 25 14:36:46 crc kubenswrapper[4796]: I1125 14:36:46.513989 4796 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-s67q8"
Nov 25 14:36:46 crc kubenswrapper[4796]: I1125 14:36:46.604232 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg9bn\" (UniqueName: \"kubernetes.io/projected/67aeab52-9ff0-430d-8e78-0f46f59e1688-kube-api-access-bg9bn\") pod \"cert-manager-cainjector-7f985d654d-ttph6\" (UID: \"67aeab52-9ff0-430d-8e78-0f46f59e1688\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-ttph6"
Nov 25 14:36:46 crc kubenswrapper[4796]: I1125 14:36:46.604298 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6rnb\" (UniqueName: \"kubernetes.io/projected/4b5c4e21-18ed-4eee-a81a-f08cf71498e5-kube-api-access-q6rnb\") pod \"cert-manager-5b446d88c5-qzs2l\" (UID: \"4b5c4e21-18ed-4eee-a81a-f08cf71498e5\") " pod="cert-manager/cert-manager-5b446d88c5-qzs2l"
Nov 25 14:36:46 crc kubenswrapper[4796]: I1125 14:36:46.604330 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttxs8\" (UniqueName: \"kubernetes.io/projected/d7365735-d514-48fd-9113-62a80d791d8b-kube-api-access-ttxs8\") pod \"cert-manager-webhook-5655c58dd6-n7x98\" (UID: \"d7365735-d514-48fd-9113-62a80d791d8b\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-n7x98"
Nov 25 14:36:46 crc kubenswrapper[4796]: I1125 14:36:46.705177 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg9bn\" (UniqueName: \"kubernetes.io/projected/67aeab52-9ff0-430d-8e78-0f46f59e1688-kube-api-access-bg9bn\") pod \"cert-manager-cainjector-7f985d654d-ttph6\" (UID: \"67aeab52-9ff0-430d-8e78-0f46f59e1688\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-ttph6"
Nov 25 14:36:46 crc kubenswrapper[4796]: I1125 14:36:46.705246 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6rnb\" (UniqueName: \"kubernetes.io/projected/4b5c4e21-18ed-4eee-a81a-f08cf71498e5-kube-api-access-q6rnb\") pod \"cert-manager-5b446d88c5-qzs2l\" (UID: \"4b5c4e21-18ed-4eee-a81a-f08cf71498e5\") " pod="cert-manager/cert-manager-5b446d88c5-qzs2l"
Nov 25 14:36:46 crc kubenswrapper[4796]: I1125 14:36:46.705282 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttxs8\" (UniqueName: \"kubernetes.io/projected/d7365735-d514-48fd-9113-62a80d791d8b-kube-api-access-ttxs8\") pod \"cert-manager-webhook-5655c58dd6-n7x98\" (UID: \"d7365735-d514-48fd-9113-62a80d791d8b\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-n7x98"
Nov 25 14:36:46 crc kubenswrapper[4796]: I1125 14:36:46.728896 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg9bn\" (UniqueName: \"kubernetes.io/projected/67aeab52-9ff0-430d-8e78-0f46f59e1688-kube-api-access-bg9bn\") pod \"cert-manager-cainjector-7f985d654d-ttph6\" (UID: \"67aeab52-9ff0-430d-8e78-0f46f59e1688\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-ttph6"
Nov 25 14:36:46 crc kubenswrapper[4796]: I1125 14:36:46.731463 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttxs8\" (UniqueName: \"kubernetes.io/projected/d7365735-d514-48fd-9113-62a80d791d8b-kube-api-access-ttxs8\") pod \"cert-manager-webhook-5655c58dd6-n7x98\" (UID: \"d7365735-d514-48fd-9113-62a80d791d8b\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-n7x98"
Nov 25 14:36:46 crc kubenswrapper[4796]: I1125 14:36:46.734088 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6rnb\" (UniqueName: \"kubernetes.io/projected/4b5c4e21-18ed-4eee-a81a-f08cf71498e5-kube-api-access-q6rnb\") pod \"cert-manager-5b446d88c5-qzs2l\" (UID: \"4b5c4e21-18ed-4eee-a81a-f08cf71498e5\") " pod="cert-manager/cert-manager-5b446d88c5-qzs2l"
Nov 25 14:36:46 crc kubenswrapper[4796]: I1125 14:36:46.787420 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-ttph6"
Nov 25 14:36:46 crc kubenswrapper[4796]: I1125 14:36:46.812488 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-qzs2l"
Nov 25 14:36:46 crc kubenswrapper[4796]: I1125 14:36:46.820689 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-n7x98"
Nov 25 14:36:47 crc kubenswrapper[4796]: I1125 14:36:47.047239 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-qzs2l"]
Nov 25 14:36:47 crc kubenswrapper[4796]: I1125 14:36:47.053262 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 25 14:36:47 crc kubenswrapper[4796]: I1125 14:36:47.085554 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-n7x98"]
Nov 25 14:36:47 crc kubenswrapper[4796]: I1125 14:36:47.208500 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-ttph6"]
Nov 25 14:36:47 crc kubenswrapper[4796]: I1125 14:36:47.625598 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-n7x98" event={"ID":"d7365735-d514-48fd-9113-62a80d791d8b","Type":"ContainerStarted","Data":"30a4bb173a39763189a360bf2c51b6ea59f105bbb0662c02571b7b48c93776ac"}
Nov 25 14:36:47 crc kubenswrapper[4796]: I1125 14:36:47.626617 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-ttph6" event={"ID":"67aeab52-9ff0-430d-8e78-0f46f59e1688","Type":"ContainerStarted","Data":"07467e027bbd4ea2244f8015776c91f7d8f2908253bd477b86d5c753862eddd7"}
Nov 25 14:36:47 crc kubenswrapper[4796]: I1125 14:36:47.627457 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-qzs2l" event={"ID":"4b5c4e21-18ed-4eee-a81a-f08cf71498e5","Type":"ContainerStarted","Data":"a8c235de9ab4539af397fabb792eb938390f4c8d3688c85dcaf586783d720936"}
Nov 25 14:36:49 crc kubenswrapper[4796]: I1125 14:36:49.513544 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 14:36:49 crc kubenswrapper[4796]: I1125 14:36:49.513939 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 14:36:51 crc kubenswrapper[4796]: I1125 14:36:51.648733 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-ttph6" event={"ID":"67aeab52-9ff0-430d-8e78-0f46f59e1688","Type":"ContainerStarted","Data":"47ddecc8a86a315286bc21b4436c67c6e82bcce8611e29cb8e3fd758818d7ed3"}
Nov 25 14:36:51 crc kubenswrapper[4796]: I1125 14:36:51.650704 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-qzs2l" event={"ID":"4b5c4e21-18ed-4eee-a81a-f08cf71498e5","Type":"ContainerStarted","Data":"fcff11235d031763ce2ca1a97d476653d1bfdaa7ca5beccd950968ca1368452b"}
Nov 25 14:36:51 crc kubenswrapper[4796]: I1125 14:36:51.652385 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-n7x98" event={"ID":"d7365735-d514-48fd-9113-62a80d791d8b","Type":"ContainerStarted","Data":"15be1ca92cc98f12d6fd6a9a8b6f95bc51c4d7d2a42f167a6c8a2c6e6d7441c9"}
Nov 25 14:36:51 crc kubenswrapper[4796]: I1125 14:36:51.652596 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-n7x98"
Nov 25 14:36:51 crc kubenswrapper[4796]: I1125 14:36:51.671751 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-ttph6" podStartSLOduration=1.669362083 podStartE2EDuration="5.671726489s" podCreationTimestamp="2025-11-25 14:36:46 +0000 UTC"
firstStartedPulling="2025-11-25 14:36:47.216096964 +0000 UTC m=+735.559206388" lastFinishedPulling="2025-11-25 14:36:51.21846137 +0000 UTC m=+739.561570794" observedRunningTime="2025-11-25 14:36:51.669807989 +0000 UTC m=+740.012917453" watchObservedRunningTime="2025-11-25 14:36:51.671726489 +0000 UTC m=+740.014835953" Nov 25 14:36:51 crc kubenswrapper[4796]: I1125 14:36:51.724358 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-qzs2l" podStartSLOduration=1.783166691 podStartE2EDuration="5.72432609s" podCreationTimestamp="2025-11-25 14:36:46 +0000 UTC" firstStartedPulling="2025-11-25 14:36:47.052968776 +0000 UTC m=+735.396078210" lastFinishedPulling="2025-11-25 14:36:50.994128185 +0000 UTC m=+739.337237609" observedRunningTime="2025-11-25 14:36:51.699677365 +0000 UTC m=+740.042786909" watchObservedRunningTime="2025-11-25 14:36:51.72432609 +0000 UTC m=+740.067435554" Nov 25 14:36:51 crc kubenswrapper[4796]: I1125 14:36:51.727411 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-n7x98" podStartSLOduration=1.828123389 podStartE2EDuration="5.727396285s" podCreationTimestamp="2025-11-25 14:36:46 +0000 UTC" firstStartedPulling="2025-11-25 14:36:47.094518719 +0000 UTC m=+735.437628143" lastFinishedPulling="2025-11-25 14:36:50.993791605 +0000 UTC m=+739.336901039" observedRunningTime="2025-11-25 14:36:51.716249503 +0000 UTC m=+740.059358957" watchObservedRunningTime="2025-11-25 14:36:51.727396285 +0000 UTC m=+740.070505749" Nov 25 14:36:56 crc kubenswrapper[4796]: I1125 14:36:56.826030 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-n7x98" Nov 25 14:36:56 crc kubenswrapper[4796]: I1125 14:36:56.970313 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-22sz8"] Nov 25 14:36:56 crc kubenswrapper[4796]: I1125 
14:36:56.973653 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="ovn-acl-logging" containerID="cri-o://7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb" gracePeriod=30 Nov 25 14:36:56 crc kubenswrapper[4796]: I1125 14:36:56.973797 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="sbdb" containerID="cri-o://f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68" gracePeriod=30 Nov 25 14:36:56 crc kubenswrapper[4796]: I1125 14:36:56.973825 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="nbdb" containerID="cri-o://f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b" gracePeriod=30 Nov 25 14:36:56 crc kubenswrapper[4796]: I1125 14:36:56.974002 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="ovn-controller" containerID="cri-o://babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542" gracePeriod=30 Nov 25 14:36:56 crc kubenswrapper[4796]: I1125 14:36:56.974077 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="northd" containerID="cri-o://0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a" gracePeriod=30 Nov 25 14:36:56 crc kubenswrapper[4796]: I1125 14:36:56.974050 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" 
containerName="kube-rbac-proxy-node" containerID="cri-o://213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358" gracePeriod=30 Nov 25 14:36:56 crc kubenswrapper[4796]: I1125 14:36:56.974015 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910" gracePeriod=30 Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.010768 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="ovnkube-controller" containerID="cri-o://339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6" gracePeriod=30 Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.355907 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovnkube-controller/3.log" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.356480 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovn-acl-logging/1.log" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.359156 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovn-acl-logging/0.log" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.359787 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovn-controller/0.log" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.360340 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.431872 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sshtq"] Nov 25 14:36:57 crc kubenswrapper[4796]: E1125 14:36:57.432116 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="kube-rbac-proxy-node" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.432131 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="kube-rbac-proxy-node" Nov 25 14:36:57 crc kubenswrapper[4796]: E1125 14:36:57.432156 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="kubecfg-setup" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.432163 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="kubecfg-setup" Nov 25 14:36:57 crc kubenswrapper[4796]: E1125 14:36:57.432174 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="ovn-acl-logging" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.432181 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="ovn-acl-logging" Nov 25 14:36:57 crc kubenswrapper[4796]: E1125 14:36:57.432192 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="ovn-acl-logging" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.432199 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="ovn-acl-logging" Nov 25 14:36:57 crc kubenswrapper[4796]: E1125 14:36:57.432210 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="ovnkube-controller" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.432217 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="ovnkube-controller" Nov 25 14:36:57 crc kubenswrapper[4796]: E1125 14:36:57.432229 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="ovnkube-controller" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.432235 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="ovnkube-controller" Nov 25 14:36:57 crc kubenswrapper[4796]: E1125 14:36:57.432246 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="northd" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.432253 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="northd" Nov 25 14:36:57 crc kubenswrapper[4796]: E1125 14:36:57.432264 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="ovnkube-controller" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.432272 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="ovnkube-controller" Nov 25 14:36:57 crc kubenswrapper[4796]: E1125 14:36:57.432282 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="ovnkube-controller" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.432289 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="ovnkube-controller" Nov 25 14:36:57 crc kubenswrapper[4796]: E1125 14:36:57.432300 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.432307 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 14:36:57 crc kubenswrapper[4796]: E1125 14:36:57.432314 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="sbdb" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.432321 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="sbdb" Nov 25 14:36:57 crc kubenswrapper[4796]: E1125 14:36:57.432330 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="nbdb" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.432337 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="nbdb" Nov 25 14:36:57 crc kubenswrapper[4796]: E1125 14:36:57.432349 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="ovn-controller" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.432358 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="ovn-controller" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.432956 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="sbdb" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.432980 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="nbdb" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.433009 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" 
containerName="ovn-acl-logging" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.433029 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="ovnkube-controller" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.433037 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="ovn-acl-logging" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.433050 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="ovn-controller" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.433066 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="ovnkube-controller" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.433076 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="northd" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.433092 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.433109 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="kube-rbac-proxy-node" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.433124 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="ovnkube-controller" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.433139 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="ovnkube-controller" Nov 25 14:36:57 crc kubenswrapper[4796]: E1125 14:36:57.447102 4796 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="ovnkube-controller" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.447184 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="ovnkube-controller" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.447809 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerName="ovnkube-controller" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.453362 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.489609 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-env-overrides\") pod \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.489681 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-ovnkube-config\") pod \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.489701 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-slash\") pod \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.489722 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8srjn\" (UniqueName: 
\"kubernetes.io/projected/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-kube-api-access-8srjn\") pod \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.489779 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-ovnkube-script-lib\") pod \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.489792 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-log-socket\") pod \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.489811 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-run-ovn-kubernetes\") pod \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.489930 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-node-log\") pod \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.489982 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-slash" (OuterVolumeSpecName: "host-slash") pod "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" (UID: "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.489982 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-node-log" (OuterVolumeSpecName: "node-log") pod "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" (UID: "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.491023 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" (UID: "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.491100 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-log-socket" (OuterVolumeSpecName: "log-socket") pod "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" (UID: "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.490944 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" (UID: "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.491404 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" (UID: "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.491624 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-run-openvswitch\") pod \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.491670 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-etc-openvswitch\") pod \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.491702 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" (UID: "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.491738 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-ovn-node-metrics-cert\") pod \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.491785 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-kubelet\") pod \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.491808 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" (UID: "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.491810 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-systemd-units\") pod \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.491845 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" (UID: "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.491872 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.491901 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-cni-netd\") pod \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.491931 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-run-ovn\") pod \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.491967 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-var-lib-openvswitch\") pod \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.491998 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-run-systemd\") pod \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.492020 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-cni-bin\") pod \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.492044 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-run-netns\") pod \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\" (UID: \"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3\") " Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.492167 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-host-run-ovn-kubernetes\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.492199 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-run-ovn\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.492227 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" (UID: "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.492229 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.492275 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-host-cni-netd\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.492298 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-var-lib-openvswitch\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.492350 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-env-overrides\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.492417 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-etc-openvswitch\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.492434 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-host-kubelet\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.492883 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-ovnkube-config\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.492949 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpd8h\" (UniqueName: \"kubernetes.io/projected/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-kube-api-access-wpd8h\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.493173 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-host-slash\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.493250 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" 
(UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-node-log\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.493313 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-log-socket\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.493353 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" (UID: "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.493380 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-run-systemd\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.493437 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" (UID: "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.493461 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" (UID: "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.493769 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" (UID: "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.493775 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" (UID: "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.493809 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" (UID: "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.493844 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" (UID: "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.493914 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-ovn-node-metrics-cert\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.494037 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-systemd-units\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.494079 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-run-openvswitch\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.494103 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-host-run-netns\") pod 
\"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.494136 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-host-cni-bin\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.494175 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-ovnkube-script-lib\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.494237 4796 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.494256 4796 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.494270 4796 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.494280 4796 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-env-overrides\") on node \"crc\" 
DevicePath \"\"" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.494288 4796 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.494296 4796 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-slash\") on node \"crc\" DevicePath \"\"" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.494318 4796 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.494328 4796 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-log-socket\") on node \"crc\" DevicePath \"\"" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.494340 4796 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.494354 4796 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-node-log\") on node \"crc\" DevicePath \"\"" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.494367 4796 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.494377 4796 reconciler_common.go:293] "Volume 
detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.494387 4796 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.494395 4796 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.494405 4796 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.494413 4796 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.494425 4796 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.498411 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-kube-api-access-8srjn" (OuterVolumeSpecName: "kube-api-access-8srjn") pod "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" (UID: "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3"). InnerVolumeSpecName "kube-api-access-8srjn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.498599 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" (UID: "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.504757 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" (UID: "6eddc136-852e-4cf9-9f8a-e9ec94fc14d3"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.595355 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-ovn-node-metrics-cert\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.595419 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-systemd-units\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.595457 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-run-openvswitch\") pod 
\"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.595492 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-host-run-netns\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.595532 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-host-cni-bin\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.595598 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-ovnkube-script-lib\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.595634 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-host-run-ovn-kubernetes\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.595665 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-systemd-units\") pod \"ovnkube-node-sshtq\" (UID: 
\"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.595715 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-run-ovn\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.595753 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-run-openvswitch\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.595681 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-run-ovn\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.595791 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-host-cni-bin\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.595857 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-host-run-ovn-kubernetes\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc 
kubenswrapper[4796]: I1125 14:36:57.595857 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.595909 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-host-run-netns\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.595956 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.595967 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-host-cni-netd\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.596002 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-host-cni-netd\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc 
kubenswrapper[4796]: I1125 14:36:57.596059 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-var-lib-openvswitch\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.596089 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-env-overrides\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.596129 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-etc-openvswitch\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.596152 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-var-lib-openvswitch\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.596164 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-host-kubelet\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.596194 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-etc-openvswitch\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.596194 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-ovnkube-config\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.596232 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-host-kubelet\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.596306 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpd8h\" (UniqueName: \"kubernetes.io/projected/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-kube-api-access-wpd8h\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.596354 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-host-slash\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.596395 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" 
(UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-node-log\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.596429 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-log-socket\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.596462 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-run-systemd\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.596489 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-host-slash\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.596504 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-node-log\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.596520 4796 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 25 14:36:57 crc 
kubenswrapper[4796]: I1125 14:36:57.596556 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-run-systemd\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.596563 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8srjn\" (UniqueName: \"kubernetes.io/projected/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-kube-api-access-8srjn\") on node \"crc\" DevicePath \"\"" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.596668 4796 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.596623 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-log-socket\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.596867 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-env-overrides\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.597009 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-ovnkube-config\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.597012 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-ovnkube-script-lib\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.599442 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-ovn-node-metrics-cert\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.617779 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpd8h\" (UniqueName: \"kubernetes.io/projected/d06d29c4-0b08-49db-9bd0-c062ad4b56b1-kube-api-access-wpd8h\") pod \"ovnkube-node-sshtq\" (UID: \"d06d29c4-0b08-49db-9bd0-c062ad4b56b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.689472 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ch8mf_7e00ee09-b0b0-4ae8-a51d-cc11fb99679b/kube-multus/2.log" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.689949 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ch8mf_7e00ee09-b0b0-4ae8-a51d-cc11fb99679b/kube-multus/1.log" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.690084 4796 generic.go:334] "Generic (PLEG): container finished" podID="7e00ee09-b0b0-4ae8-a51d-cc11fb99679b" containerID="d921bf739f69487a8cd8d927c3aef547d59558cded69656dc16ab6fb56ee5f6e" exitCode=2 Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.690138 4796 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-multus/multus-ch8mf" event={"ID":"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b","Type":"ContainerDied","Data":"d921bf739f69487a8cd8d927c3aef547d59558cded69656dc16ab6fb56ee5f6e"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.690366 4796 scope.go:117] "RemoveContainer" containerID="3d0e5d28fbb41835a1f2790a85f8d340b3487500a92eb12385db1ff4ce4c85c9" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.690968 4796 scope.go:117] "RemoveContainer" containerID="d921bf739f69487a8cd8d927c3aef547d59558cded69656dc16ab6fb56ee5f6e" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.695043 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovnkube-controller/3.log" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.697100 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovn-acl-logging/1.log" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.700431 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovn-acl-logging/0.log" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.700908 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-22sz8_6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/ovn-controller/0.log" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701239 4796 generic.go:334] "Generic (PLEG): container finished" podID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerID="339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6" exitCode=0 Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701258 4796 generic.go:334] "Generic (PLEG): container finished" podID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerID="7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb" exitCode=143 
Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701268 4796 generic.go:334] "Generic (PLEG): container finished" podID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerID="f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68" exitCode=0 Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701277 4796 generic.go:334] "Generic (PLEG): container finished" podID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerID="f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b" exitCode=0 Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701285 4796 generic.go:334] "Generic (PLEG): container finished" podID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerID="0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a" exitCode=0 Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701293 4796 generic.go:334] "Generic (PLEG): container finished" podID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerID="b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910" exitCode=0 Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701300 4796 generic.go:334] "Generic (PLEG): container finished" podID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerID="213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358" exitCode=0 Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701308 4796 generic.go:334] "Generic (PLEG): container finished" podID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" containerID="babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542" exitCode=143 Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701330 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerDied","Data":"339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701361 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerDied","Data":"7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701376 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerDied","Data":"f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701388 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerDied","Data":"f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701399 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerDied","Data":"0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701412 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701426 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701433 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701440 4796 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701447 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701456 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerDied","Data":"b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701469 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701477 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701485 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701492 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701499 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b"} Nov 25 14:36:57 crc 
kubenswrapper[4796]: I1125 14:36:57.701506 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701514 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701520 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701527 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701535 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701542 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701552 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerDied","Data":"213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701563 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701586 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701593 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701600 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701608 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701614 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701621 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701628 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701634 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701641 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701648 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701658 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerDied","Data":"babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701669 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701678 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701685 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701693 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701700 4796 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701706 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701713 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701720 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701727 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701734 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701740 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701751 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" event={"ID":"6eddc136-852e-4cf9-9f8a-e9ec94fc14d3","Type":"ContainerDied","Data":"7a902735743e7e9e812461d34e94610342c5c2ec20371f5a4e9316517b3f1e93"} Nov 25 
14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701762 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701771 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701778 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701785 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701791 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701797 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701803 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701810 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358"} Nov 25 
14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701817 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701823 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701830 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb"} Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.701963 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-22sz8" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.741289 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-22sz8"] Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.742840 4796 scope.go:117] "RemoveContainer" containerID="339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.745227 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-22sz8"] Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.756932 4796 scope.go:117] "RemoveContainer" containerID="e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.773063 4796 scope.go:117] "RemoveContainer" containerID="7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.777321 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.793231 4796 scope.go:117] "RemoveContainer" containerID="f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68" Nov 25 14:36:57 crc kubenswrapper[4796]: W1125 14:36:57.797125 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd06d29c4_0b08_49db_9bd0_c062ad4b56b1.slice/crio-d0c82948f1efca315711f9c4fed9aeb631a50973d4ffc08a622af328f71af2d2 WatchSource:0}: Error finding container d0c82948f1efca315711f9c4fed9aeb631a50973d4ffc08a622af328f71af2d2: Status 404 returned error can't find the container with id d0c82948f1efca315711f9c4fed9aeb631a50973d4ffc08a622af328f71af2d2 Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.811109 4796 scope.go:117] "RemoveContainer" containerID="f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.831983 4796 scope.go:117] "RemoveContainer" containerID="0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.854191 4796 scope.go:117] "RemoveContainer" containerID="b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.867979 4796 scope.go:117] "RemoveContainer" containerID="213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.894550 4796 scope.go:117] "RemoveContainer" containerID="59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.909665 4796 scope.go:117] "RemoveContainer" containerID="babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.933725 4796 scope.go:117] "RemoveContainer" 
containerID="d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.961818 4796 scope.go:117] "RemoveContainer" containerID="339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6" Nov 25 14:36:57 crc kubenswrapper[4796]: E1125 14:36:57.962558 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6\": container with ID starting with 339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6 not found: ID does not exist" containerID="339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.962615 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6"} err="failed to get container status \"339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6\": rpc error: code = NotFound desc = could not find container \"339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6\": container with ID starting with 339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6 not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.962642 4796 scope.go:117] "RemoveContainer" containerID="e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c" Nov 25 14:36:57 crc kubenswrapper[4796]: E1125 14:36:57.962995 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c\": container with ID starting with e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c not found: ID does not exist" containerID="e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c" Nov 25 14:36:57 crc 
kubenswrapper[4796]: I1125 14:36:57.963024 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c"} err="failed to get container status \"e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c\": rpc error: code = NotFound desc = could not find container \"e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c\": container with ID starting with e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.963041 4796 scope.go:117] "RemoveContainer" containerID="7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb" Nov 25 14:36:57 crc kubenswrapper[4796]: E1125 14:36:57.964498 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\": container with ID starting with 7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb not found: ID does not exist" containerID="7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.964528 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb"} err="failed to get container status \"7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\": rpc error: code = NotFound desc = could not find container \"7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\": container with ID starting with 7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.964557 4796 scope.go:117] "RemoveContainer" containerID="f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68" Nov 25 
14:36:57 crc kubenswrapper[4796]: E1125 14:36:57.966405 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\": container with ID starting with f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68 not found: ID does not exist" containerID="f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.966438 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68"} err="failed to get container status \"f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\": rpc error: code = NotFound desc = could not find container \"f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\": container with ID starting with f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68 not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.966458 4796 scope.go:117] "RemoveContainer" containerID="f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b" Nov 25 14:36:57 crc kubenswrapper[4796]: E1125 14:36:57.968874 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\": container with ID starting with f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b not found: ID does not exist" containerID="f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.968905 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b"} err="failed to get container status 
\"f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\": rpc error: code = NotFound desc = could not find container \"f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\": container with ID starting with f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.968926 4796 scope.go:117] "RemoveContainer" containerID="0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a" Nov 25 14:36:57 crc kubenswrapper[4796]: E1125 14:36:57.969394 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\": container with ID starting with 0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a not found: ID does not exist" containerID="0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.969425 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a"} err="failed to get container status \"0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\": rpc error: code = NotFound desc = could not find container \"0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\": container with ID starting with 0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.969444 4796 scope.go:117] "RemoveContainer" containerID="b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910" Nov 25 14:36:57 crc kubenswrapper[4796]: E1125 14:36:57.969915 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\": container with ID starting with b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910 not found: ID does not exist" containerID="b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.969947 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910"} err="failed to get container status \"b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\": rpc error: code = NotFound desc = could not find container \"b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\": container with ID starting with b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910 not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.969966 4796 scope.go:117] "RemoveContainer" containerID="213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358" Nov 25 14:36:57 crc kubenswrapper[4796]: E1125 14:36:57.970411 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\": container with ID starting with 213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358 not found: ID does not exist" containerID="213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.970445 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358"} err="failed to get container status \"213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\": rpc error: code = NotFound desc = could not find container \"213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\": container with ID 
starting with 213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358 not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.970465 4796 scope.go:117] "RemoveContainer" containerID="59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446" Nov 25 14:36:57 crc kubenswrapper[4796]: E1125 14:36:57.970921 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\": container with ID starting with 59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446 not found: ID does not exist" containerID="59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.970952 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446"} err="failed to get container status \"59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\": rpc error: code = NotFound desc = could not find container \"59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\": container with ID starting with 59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446 not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.970969 4796 scope.go:117] "RemoveContainer" containerID="babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542" Nov 25 14:36:57 crc kubenswrapper[4796]: E1125 14:36:57.971235 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\": container with ID starting with babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542 not found: ID does not exist" containerID="babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542" Nov 25 
14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.971267 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542"} err="failed to get container status \"babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\": rpc error: code = NotFound desc = could not find container \"babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\": container with ID starting with babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542 not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.971288 4796 scope.go:117] "RemoveContainer" containerID="d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb" Nov 25 14:36:57 crc kubenswrapper[4796]: E1125 14:36:57.971635 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\": container with ID starting with d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb not found: ID does not exist" containerID="d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.971661 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb"} err="failed to get container status \"d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\": rpc error: code = NotFound desc = could not find container \"d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\": container with ID starting with d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.971675 4796 scope.go:117] "RemoveContainer" 
containerID="339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.971958 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6"} err="failed to get container status \"339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6\": rpc error: code = NotFound desc = could not find container \"339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6\": container with ID starting with 339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6 not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.971980 4796 scope.go:117] "RemoveContainer" containerID="e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.972699 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c"} err="failed to get container status \"e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c\": rpc error: code = NotFound desc = could not find container \"e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c\": container with ID starting with e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.972722 4796 scope.go:117] "RemoveContainer" containerID="7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.973040 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb"} err="failed to get container status \"7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\": rpc error: code = NotFound desc = could 
not find container \"7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\": container with ID starting with 7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.973062 4796 scope.go:117] "RemoveContainer" containerID="f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.973284 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68"} err="failed to get container status \"f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\": rpc error: code = NotFound desc = could not find container \"f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\": container with ID starting with f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68 not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.973303 4796 scope.go:117] "RemoveContainer" containerID="f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.973521 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b"} err="failed to get container status \"f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\": rpc error: code = NotFound desc = could not find container \"f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\": container with ID starting with f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.973542 4796 scope.go:117] "RemoveContainer" containerID="0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 
14:36:57.973803 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a"} err="failed to get container status \"0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\": rpc error: code = NotFound desc = could not find container \"0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\": container with ID starting with 0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.973818 4796 scope.go:117] "RemoveContainer" containerID="b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.974027 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910"} err="failed to get container status \"b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\": rpc error: code = NotFound desc = could not find container \"b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\": container with ID starting with b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910 not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.974043 4796 scope.go:117] "RemoveContainer" containerID="213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.974445 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358"} err="failed to get container status \"213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\": rpc error: code = NotFound desc = could not find container \"213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\": container with ID starting with 
213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358 not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.974596 4796 scope.go:117] "RemoveContainer" containerID="59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.974991 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446"} err="failed to get container status \"59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\": rpc error: code = NotFound desc = could not find container \"59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\": container with ID starting with 59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446 not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.975071 4796 scope.go:117] "RemoveContainer" containerID="babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.975357 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542"} err="failed to get container status \"babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\": rpc error: code = NotFound desc = could not find container \"babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\": container with ID starting with babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542 not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.975384 4796 scope.go:117] "RemoveContainer" containerID="d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.975683 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb"} err="failed to get container status \"d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\": rpc error: code = NotFound desc = could not find container \"d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\": container with ID starting with d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.975777 4796 scope.go:117] "RemoveContainer" containerID="339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.976111 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6"} err="failed to get container status \"339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6\": rpc error: code = NotFound desc = could not find container \"339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6\": container with ID starting with 339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6 not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.976137 4796 scope.go:117] "RemoveContainer" containerID="e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.976380 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c"} err="failed to get container status \"e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c\": rpc error: code = NotFound desc = could not find container \"e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c\": container with ID starting with e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c not found: ID does not 
exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.976449 4796 scope.go:117] "RemoveContainer" containerID="7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.976775 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb"} err="failed to get container status \"7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\": rpc error: code = NotFound desc = could not find container \"7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\": container with ID starting with 7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.976794 4796 scope.go:117] "RemoveContainer" containerID="f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.977039 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68"} err="failed to get container status \"f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\": rpc error: code = NotFound desc = could not find container \"f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\": container with ID starting with f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68 not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.977060 4796 scope.go:117] "RemoveContainer" containerID="f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.977311 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b"} err="failed to get container status 
\"f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\": rpc error: code = NotFound desc = could not find container \"f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\": container with ID starting with f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.977368 4796 scope.go:117] "RemoveContainer" containerID="0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.977671 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a"} err="failed to get container status \"0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\": rpc error: code = NotFound desc = could not find container \"0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\": container with ID starting with 0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.977706 4796 scope.go:117] "RemoveContainer" containerID="b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.977919 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910"} err="failed to get container status \"b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\": rpc error: code = NotFound desc = could not find container \"b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910\": container with ID starting with b73d41cd8f78bda0c7067626ca35b15031c981e09f8019b650eb5d2c7ceee910 not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.977944 4796 scope.go:117] "RemoveContainer" 
containerID="213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.978158 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358"} err="failed to get container status \"213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\": rpc error: code = NotFound desc = could not find container \"213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358\": container with ID starting with 213068edf45295df33a53f52c6413f67da5a46218bc88b405d62a53a58f59358 not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.978202 4796 scope.go:117] "RemoveContainer" containerID="59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.978424 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446"} err="failed to get container status \"59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\": rpc error: code = NotFound desc = could not find container \"59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446\": container with ID starting with 59c5892da14471cd22c915d0c3a14e022529d15ea9df8f0ee74cdd9f126d9446 not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.978444 4796 scope.go:117] "RemoveContainer" containerID="babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.978769 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542"} err="failed to get container status \"babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\": rpc error: code = NotFound desc = could 
not find container \"babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542\": container with ID starting with babb9e96e1941a0019e478fbc6ff5c1c68a7834201a599cb01fbf1b4e2e3e542 not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.978794 4796 scope.go:117] "RemoveContainer" containerID="d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.979148 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb"} err="failed to get container status \"d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\": rpc error: code = NotFound desc = could not find container \"d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb\": container with ID starting with d842adc7daec99f87126a38255df0e60d744b269f953e69f036f1e1e9866d4fb not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.979180 4796 scope.go:117] "RemoveContainer" containerID="339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.979448 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6"} err="failed to get container status \"339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6\": rpc error: code = NotFound desc = could not find container \"339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6\": container with ID starting with 339fbdd23a2bdd48847d0c3d0f7b3759eeb539aa3f44c52d63f4d7c19b3ab7e6 not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.979522 4796 scope.go:117] "RemoveContainer" containerID="e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 
14:36:57.980784 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c"} err="failed to get container status \"e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c\": rpc error: code = NotFound desc = could not find container \"e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c\": container with ID starting with e1203945e1b0565329cb7c6d6f33d93c4adf5760e427c9025d9a7e8445ba301c not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.980806 4796 scope.go:117] "RemoveContainer" containerID="7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.981094 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb"} err="failed to get container status \"7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\": rpc error: code = NotFound desc = could not find container \"7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb\": container with ID starting with 7482121c923bfe6b3096e8df3ecfb41d1ee12a4e216a544812186d87de5d63cb not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.981119 4796 scope.go:117] "RemoveContainer" containerID="f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.981363 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68"} err="failed to get container status \"f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\": rpc error: code = NotFound desc = could not find container \"f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68\": container with ID starting with 
f455fcaf228c77917a41981fe703bcd28d6407733cb83b785ea94645b4edab68 not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.981435 4796 scope.go:117] "RemoveContainer" containerID="f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.981983 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b"} err="failed to get container status \"f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\": rpc error: code = NotFound desc = could not find container \"f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b\": container with ID starting with f1204c47298b69669547219fdd5ee7cb2d1d425a686469ff78bfef8747c7b88b not found: ID does not exist" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.982051 4796 scope.go:117] "RemoveContainer" containerID="0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a" Nov 25 14:36:57 crc kubenswrapper[4796]: I1125 14:36:57.982821 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a"} err="failed to get container status \"0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\": rpc error: code = NotFound desc = could not find container \"0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a\": container with ID starting with 0539fd1c2cc461ec1da2331c8029823ac4467d7681d2e73c14994a522ad83e8a not found: ID does not exist" Nov 25 14:36:58 crc kubenswrapper[4796]: I1125 14:36:58.424630 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eddc136-852e-4cf9-9f8a-e9ec94fc14d3" path="/var/lib/kubelet/pods/6eddc136-852e-4cf9-9f8a-e9ec94fc14d3/volumes" Nov 25 14:36:58 crc kubenswrapper[4796]: I1125 14:36:58.715493 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-ch8mf_7e00ee09-b0b0-4ae8-a51d-cc11fb99679b/kube-multus/2.log" Nov 25 14:36:58 crc kubenswrapper[4796]: I1125 14:36:58.715740 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ch8mf" event={"ID":"7e00ee09-b0b0-4ae8-a51d-cc11fb99679b","Type":"ContainerStarted","Data":"422f3fbd0605d1d7d8b803f28e699be63f09b801fed91c7e4daf65d289e88f0d"} Nov 25 14:36:58 crc kubenswrapper[4796]: I1125 14:36:58.718879 4796 generic.go:334] "Generic (PLEG): container finished" podID="d06d29c4-0b08-49db-9bd0-c062ad4b56b1" containerID="2ed6014ad46b7a18dbfe96048d47db61e7581a3bd35f376d6d74a89ff5f05ef8" exitCode=0 Nov 25 14:36:58 crc kubenswrapper[4796]: I1125 14:36:58.718936 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" event={"ID":"d06d29c4-0b08-49db-9bd0-c062ad4b56b1","Type":"ContainerDied","Data":"2ed6014ad46b7a18dbfe96048d47db61e7581a3bd35f376d6d74a89ff5f05ef8"} Nov 25 14:36:58 crc kubenswrapper[4796]: I1125 14:36:58.718972 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" event={"ID":"d06d29c4-0b08-49db-9bd0-c062ad4b56b1","Type":"ContainerStarted","Data":"d0c82948f1efca315711f9c4fed9aeb631a50973d4ffc08a622af328f71af2d2"} Nov 25 14:36:59 crc kubenswrapper[4796]: I1125 14:36:59.738503 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" event={"ID":"d06d29c4-0b08-49db-9bd0-c062ad4b56b1","Type":"ContainerStarted","Data":"345aac839018517ffa57ea2056f72e3c2574de1e76dde5306d7ffa6b40d0c126"} Nov 25 14:36:59 crc kubenswrapper[4796]: I1125 14:36:59.738853 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" event={"ID":"d06d29c4-0b08-49db-9bd0-c062ad4b56b1","Type":"ContainerStarted","Data":"d684058dfcbd2906d874679c7153c79e29389c710139937483a56266ecf82ca7"} Nov 25 14:36:59 crc kubenswrapper[4796]: I1125 
14:36:59.738871 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" event={"ID":"d06d29c4-0b08-49db-9bd0-c062ad4b56b1","Type":"ContainerStarted","Data":"30bb1ba870ef71872cd072ba6a074eed3833fb343860dcb60ea6f043d691a7c6"} Nov 25 14:36:59 crc kubenswrapper[4796]: I1125 14:36:59.738915 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" event={"ID":"d06d29c4-0b08-49db-9bd0-c062ad4b56b1","Type":"ContainerStarted","Data":"124fc32375b8bdc64e338091fbede600b5a09f791ca353f2f5e78fc8747c4836"} Nov 25 14:37:00 crc kubenswrapper[4796]: I1125 14:37:00.750818 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" event={"ID":"d06d29c4-0b08-49db-9bd0-c062ad4b56b1","Type":"ContainerStarted","Data":"6090e3ed6cf3c45e63c39b3688be000737381f16e99155d2a3c35f09ad283294"} Nov 25 14:37:00 crc kubenswrapper[4796]: I1125 14:37:00.751319 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" event={"ID":"d06d29c4-0b08-49db-9bd0-c062ad4b56b1","Type":"ContainerStarted","Data":"71443a11ad83c4e5938104507186fae9a46f855031f15553249ef9031d43fd81"} Nov 25 14:37:02 crc kubenswrapper[4796]: I1125 14:37:02.771771 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" event={"ID":"d06d29c4-0b08-49db-9bd0-c062ad4b56b1","Type":"ContainerStarted","Data":"b4a16e040eedfde5572db637ac6807fdf585b8fa78e5c94628cbdc992d264345"} Nov 25 14:37:05 crc kubenswrapper[4796]: I1125 14:37:05.806653 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" event={"ID":"d06d29c4-0b08-49db-9bd0-c062ad4b56b1","Type":"ContainerStarted","Data":"30a2a7927dce403f3c4bbe8b47c42de8b41fabbbcde458f5450cd111e061de49"} Nov 25 14:37:05 crc kubenswrapper[4796]: I1125 14:37:05.807312 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:37:05 crc kubenswrapper[4796]: I1125 14:37:05.807327 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:37:05 crc kubenswrapper[4796]: I1125 14:37:05.807340 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:37:05 crc kubenswrapper[4796]: I1125 14:37:05.845482 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:37:05 crc kubenswrapper[4796]: I1125 14:37:05.861409 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:37:05 crc kubenswrapper[4796]: I1125 14:37:05.899783 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" podStartSLOduration=8.899761271 podStartE2EDuration="8.899761271s" podCreationTimestamp="2025-11-25 14:36:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:37:05.861293382 +0000 UTC m=+754.204402816" watchObservedRunningTime="2025-11-25 14:37:05.899761271 +0000 UTC m=+754.242870705" Nov 25 14:37:09 crc kubenswrapper[4796]: I1125 14:37:09.678745 4796 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 25 14:37:19 crc kubenswrapper[4796]: I1125 14:37:19.514407 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 14:37:19 crc kubenswrapper[4796]: I1125 14:37:19.515157 4796 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 14:37:27 crc kubenswrapper[4796]: I1125 14:37:27.800938 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sshtq" Nov 25 14:37:36 crc kubenswrapper[4796]: I1125 14:37:36.968230 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq"] Nov 25 14:37:36 crc kubenswrapper[4796]: I1125 14:37:36.971202 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq" Nov 25 14:37:36 crc kubenswrapper[4796]: I1125 14:37:36.973874 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 25 14:37:36 crc kubenswrapper[4796]: I1125 14:37:36.984342 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq"] Nov 25 14:37:37 crc kubenswrapper[4796]: I1125 14:37:37.071826 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wbfv\" (UniqueName: \"kubernetes.io/projected/1fee00b0-68b7-43d4-85a5-d63daf73962d-kube-api-access-6wbfv\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq\" (UID: \"1fee00b0-68b7-43d4-85a5-d63daf73962d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq" Nov 25 14:37:37 crc kubenswrapper[4796]: I1125 14:37:37.071963 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1fee00b0-68b7-43d4-85a5-d63daf73962d-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq\" (UID: \"1fee00b0-68b7-43d4-85a5-d63daf73962d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq" Nov 25 14:37:37 crc kubenswrapper[4796]: I1125 14:37:37.072093 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1fee00b0-68b7-43d4-85a5-d63daf73962d-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq\" (UID: \"1fee00b0-68b7-43d4-85a5-d63daf73962d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq" Nov 25 14:37:37 crc kubenswrapper[4796]: I1125 14:37:37.173782 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1fee00b0-68b7-43d4-85a5-d63daf73962d-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq\" (UID: \"1fee00b0-68b7-43d4-85a5-d63daf73962d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq" Nov 25 14:37:37 crc kubenswrapper[4796]: I1125 14:37:37.173917 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1fee00b0-68b7-43d4-85a5-d63daf73962d-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq\" (UID: \"1fee00b0-68b7-43d4-85a5-d63daf73962d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq" Nov 25 14:37:37 crc kubenswrapper[4796]: I1125 14:37:37.174002 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wbfv\" (UniqueName: \"kubernetes.io/projected/1fee00b0-68b7-43d4-85a5-d63daf73962d-kube-api-access-6wbfv\") pod 
\"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq\" (UID: \"1fee00b0-68b7-43d4-85a5-d63daf73962d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq" Nov 25 14:37:37 crc kubenswrapper[4796]: I1125 14:37:37.175096 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1fee00b0-68b7-43d4-85a5-d63daf73962d-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq\" (UID: \"1fee00b0-68b7-43d4-85a5-d63daf73962d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq" Nov 25 14:37:37 crc kubenswrapper[4796]: I1125 14:37:37.176015 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1fee00b0-68b7-43d4-85a5-d63daf73962d-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq\" (UID: \"1fee00b0-68b7-43d4-85a5-d63daf73962d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq" Nov 25 14:37:37 crc kubenswrapper[4796]: I1125 14:37:37.202326 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wbfv\" (UniqueName: \"kubernetes.io/projected/1fee00b0-68b7-43d4-85a5-d63daf73962d-kube-api-access-6wbfv\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq\" (UID: \"1fee00b0-68b7-43d4-85a5-d63daf73962d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq" Nov 25 14:37:37 crc kubenswrapper[4796]: I1125 14:37:37.293061 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq" Nov 25 14:37:37 crc kubenswrapper[4796]: I1125 14:37:37.543143 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq"] Nov 25 14:37:38 crc kubenswrapper[4796]: I1125 14:37:38.026614 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq" event={"ID":"1fee00b0-68b7-43d4-85a5-d63daf73962d","Type":"ContainerStarted","Data":"f8330a29670db1e2654cf444ae961ae9614b5e0a9b9fd7fe0463c7568e13b2e5"} Nov 25 14:37:38 crc kubenswrapper[4796]: I1125 14:37:38.026686 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq" event={"ID":"1fee00b0-68b7-43d4-85a5-d63daf73962d","Type":"ContainerStarted","Data":"dee32c00de2a43c561c376c2fa6cb258f6f2d6adddf70564a853ae20eed0332c"} Nov 25 14:37:39 crc kubenswrapper[4796]: I1125 14:37:39.033155 4796 generic.go:334] "Generic (PLEG): container finished" podID="1fee00b0-68b7-43d4-85a5-d63daf73962d" containerID="f8330a29670db1e2654cf444ae961ae9614b5e0a9b9fd7fe0463c7568e13b2e5" exitCode=0 Nov 25 14:37:39 crc kubenswrapper[4796]: I1125 14:37:39.033258 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq" event={"ID":"1fee00b0-68b7-43d4-85a5-d63daf73962d","Type":"ContainerDied","Data":"f8330a29670db1e2654cf444ae961ae9614b5e0a9b9fd7fe0463c7568e13b2e5"} Nov 25 14:37:39 crc kubenswrapper[4796]: I1125 14:37:39.338956 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gpd2j"] Nov 25 14:37:39 crc kubenswrapper[4796]: I1125 14:37:39.340406 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gpd2j" Nov 25 14:37:39 crc kubenswrapper[4796]: I1125 14:37:39.349482 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gpd2j"] Nov 25 14:37:39 crc kubenswrapper[4796]: I1125 14:37:39.410226 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9-utilities\") pod \"redhat-operators-gpd2j\" (UID: \"3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9\") " pod="openshift-marketplace/redhat-operators-gpd2j" Nov 25 14:37:39 crc kubenswrapper[4796]: I1125 14:37:39.410274 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9-catalog-content\") pod \"redhat-operators-gpd2j\" (UID: \"3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9\") " pod="openshift-marketplace/redhat-operators-gpd2j" Nov 25 14:37:39 crc kubenswrapper[4796]: I1125 14:37:39.410537 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9xql\" (UniqueName: \"kubernetes.io/projected/3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9-kube-api-access-h9xql\") pod \"redhat-operators-gpd2j\" (UID: \"3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9\") " pod="openshift-marketplace/redhat-operators-gpd2j" Nov 25 14:37:39 crc kubenswrapper[4796]: I1125 14:37:39.512015 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9xql\" (UniqueName: \"kubernetes.io/projected/3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9-kube-api-access-h9xql\") pod \"redhat-operators-gpd2j\" (UID: \"3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9\") " pod="openshift-marketplace/redhat-operators-gpd2j" Nov 25 14:37:39 crc kubenswrapper[4796]: I1125 14:37:39.512110 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9-utilities\") pod \"redhat-operators-gpd2j\" (UID: \"3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9\") " pod="openshift-marketplace/redhat-operators-gpd2j" Nov 25 14:37:39 crc kubenswrapper[4796]: I1125 14:37:39.512133 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9-catalog-content\") pod \"redhat-operators-gpd2j\" (UID: \"3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9\") " pod="openshift-marketplace/redhat-operators-gpd2j" Nov 25 14:37:39 crc kubenswrapper[4796]: I1125 14:37:39.512591 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9-catalog-content\") pod \"redhat-operators-gpd2j\" (UID: \"3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9\") " pod="openshift-marketplace/redhat-operators-gpd2j" Nov 25 14:37:39 crc kubenswrapper[4796]: I1125 14:37:39.512824 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9-utilities\") pod \"redhat-operators-gpd2j\" (UID: \"3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9\") " pod="openshift-marketplace/redhat-operators-gpd2j" Nov 25 14:37:39 crc kubenswrapper[4796]: I1125 14:37:39.543808 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9xql\" (UniqueName: \"kubernetes.io/projected/3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9-kube-api-access-h9xql\") pod \"redhat-operators-gpd2j\" (UID: \"3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9\") " pod="openshift-marketplace/redhat-operators-gpd2j" Nov 25 14:37:39 crc kubenswrapper[4796]: I1125 14:37:39.668624 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gpd2j" Nov 25 14:37:39 crc kubenswrapper[4796]: I1125 14:37:39.885983 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gpd2j"] Nov 25 14:37:39 crc kubenswrapper[4796]: W1125 14:37:39.889688 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e1c8444_48ee_49c8_aad4_9de8c9bfa5b9.slice/crio-d5156871af48a5ef6e20a91e3c7e5124dfe6f3d50dd16e50f5db49045b33528e WatchSource:0}: Error finding container d5156871af48a5ef6e20a91e3c7e5124dfe6f3d50dd16e50f5db49045b33528e: Status 404 returned error can't find the container with id d5156871af48a5ef6e20a91e3c7e5124dfe6f3d50dd16e50f5db49045b33528e Nov 25 14:37:40 crc kubenswrapper[4796]: I1125 14:37:40.038905 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpd2j" event={"ID":"3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9","Type":"ContainerStarted","Data":"d5156871af48a5ef6e20a91e3c7e5124dfe6f3d50dd16e50f5db49045b33528e"} Nov 25 14:37:41 crc kubenswrapper[4796]: I1125 14:37:41.046893 4796 generic.go:334] "Generic (PLEG): container finished" podID="1fee00b0-68b7-43d4-85a5-d63daf73962d" containerID="cdd19181de73f58860e5b4c8d4cd48671d77bb8de9eab0b3146b15e47a8545c9" exitCode=0 Nov 25 14:37:41 crc kubenswrapper[4796]: I1125 14:37:41.047046 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq" event={"ID":"1fee00b0-68b7-43d4-85a5-d63daf73962d","Type":"ContainerDied","Data":"cdd19181de73f58860e5b4c8d4cd48671d77bb8de9eab0b3146b15e47a8545c9"} Nov 25 14:37:41 crc kubenswrapper[4796]: I1125 14:37:41.049088 4796 generic.go:334] "Generic (PLEG): container finished" podID="3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9" containerID="78c0c24af9242c7823224807320dde3d8cec5e1b7cecb02675b5a1b47d9bce61" exitCode=0 Nov 25 14:37:41 crc 
kubenswrapper[4796]: I1125 14:37:41.049151 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpd2j" event={"ID":"3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9","Type":"ContainerDied","Data":"78c0c24af9242c7823224807320dde3d8cec5e1b7cecb02675b5a1b47d9bce61"} Nov 25 14:37:42 crc kubenswrapper[4796]: I1125 14:37:42.057280 4796 generic.go:334] "Generic (PLEG): container finished" podID="1fee00b0-68b7-43d4-85a5-d63daf73962d" containerID="53f679a2449170bca152107ea57a909bffc9584efffbdb071cb2f481e2c8c06e" exitCode=0 Nov 25 14:37:42 crc kubenswrapper[4796]: I1125 14:37:42.057341 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq" event={"ID":"1fee00b0-68b7-43d4-85a5-d63daf73962d","Type":"ContainerDied","Data":"53f679a2449170bca152107ea57a909bffc9584efffbdb071cb2f481e2c8c06e"} Nov 25 14:37:42 crc kubenswrapper[4796]: I1125 14:37:42.061360 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpd2j" event={"ID":"3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9","Type":"ContainerStarted","Data":"857b8ed18ef9fb075f9cc57c62e115ef52f334ea0c9cc3b515e35d766733ced7"} Nov 25 14:37:43 crc kubenswrapper[4796]: I1125 14:37:43.068068 4796 generic.go:334] "Generic (PLEG): container finished" podID="3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9" containerID="857b8ed18ef9fb075f9cc57c62e115ef52f334ea0c9cc3b515e35d766733ced7" exitCode=0 Nov 25 14:37:43 crc kubenswrapper[4796]: I1125 14:37:43.068143 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpd2j" event={"ID":"3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9","Type":"ContainerDied","Data":"857b8ed18ef9fb075f9cc57c62e115ef52f334ea0c9cc3b515e35d766733ced7"} Nov 25 14:37:43 crc kubenswrapper[4796]: I1125 14:37:43.402477 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq" Nov 25 14:37:43 crc kubenswrapper[4796]: I1125 14:37:43.467335 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wbfv\" (UniqueName: \"kubernetes.io/projected/1fee00b0-68b7-43d4-85a5-d63daf73962d-kube-api-access-6wbfv\") pod \"1fee00b0-68b7-43d4-85a5-d63daf73962d\" (UID: \"1fee00b0-68b7-43d4-85a5-d63daf73962d\") " Nov 25 14:37:43 crc kubenswrapper[4796]: I1125 14:37:43.467765 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1fee00b0-68b7-43d4-85a5-d63daf73962d-bundle\") pod \"1fee00b0-68b7-43d4-85a5-d63daf73962d\" (UID: \"1fee00b0-68b7-43d4-85a5-d63daf73962d\") " Nov 25 14:37:43 crc kubenswrapper[4796]: I1125 14:37:43.467818 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1fee00b0-68b7-43d4-85a5-d63daf73962d-util\") pod \"1fee00b0-68b7-43d4-85a5-d63daf73962d\" (UID: \"1fee00b0-68b7-43d4-85a5-d63daf73962d\") " Nov 25 14:37:43 crc kubenswrapper[4796]: I1125 14:37:43.468920 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fee00b0-68b7-43d4-85a5-d63daf73962d-bundle" (OuterVolumeSpecName: "bundle") pod "1fee00b0-68b7-43d4-85a5-d63daf73962d" (UID: "1fee00b0-68b7-43d4-85a5-d63daf73962d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:37:43 crc kubenswrapper[4796]: I1125 14:37:43.473874 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fee00b0-68b7-43d4-85a5-d63daf73962d-kube-api-access-6wbfv" (OuterVolumeSpecName: "kube-api-access-6wbfv") pod "1fee00b0-68b7-43d4-85a5-d63daf73962d" (UID: "1fee00b0-68b7-43d4-85a5-d63daf73962d"). InnerVolumeSpecName "kube-api-access-6wbfv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:37:43 crc kubenswrapper[4796]: I1125 14:37:43.484248 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fee00b0-68b7-43d4-85a5-d63daf73962d-util" (OuterVolumeSpecName: "util") pod "1fee00b0-68b7-43d4-85a5-d63daf73962d" (UID: "1fee00b0-68b7-43d4-85a5-d63daf73962d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:37:43 crc kubenswrapper[4796]: I1125 14:37:43.569114 4796 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1fee00b0-68b7-43d4-85a5-d63daf73962d-util\") on node \"crc\" DevicePath \"\"" Nov 25 14:37:43 crc kubenswrapper[4796]: I1125 14:37:43.569153 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wbfv\" (UniqueName: \"kubernetes.io/projected/1fee00b0-68b7-43d4-85a5-d63daf73962d-kube-api-access-6wbfv\") on node \"crc\" DevicePath \"\"" Nov 25 14:37:43 crc kubenswrapper[4796]: I1125 14:37:43.569168 4796 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1fee00b0-68b7-43d4-85a5-d63daf73962d-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:37:44 crc kubenswrapper[4796]: I1125 14:37:44.076968 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpd2j" event={"ID":"3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9","Type":"ContainerStarted","Data":"61475b43e26c33d3f9196427d63902b2b74f6a52136cecbe54d7a0078bd345ed"} Nov 25 14:37:44 crc kubenswrapper[4796]: I1125 14:37:44.079412 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq" event={"ID":"1fee00b0-68b7-43d4-85a5-d63daf73962d","Type":"ContainerDied","Data":"dee32c00de2a43c561c376c2fa6cb258f6f2d6adddf70564a853ae20eed0332c"} Nov 25 14:37:44 crc kubenswrapper[4796]: I1125 14:37:44.079451 4796 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dee32c00de2a43c561c376c2fa6cb258f6f2d6adddf70564a853ae20eed0332c" Nov 25 14:37:44 crc kubenswrapper[4796]: I1125 14:37:44.079485 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq" Nov 25 14:37:44 crc kubenswrapper[4796]: I1125 14:37:44.105389 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gpd2j" podStartSLOduration=2.6619064740000002 podStartE2EDuration="5.105368589s" podCreationTimestamp="2025-11-25 14:37:39 +0000 UTC" firstStartedPulling="2025-11-25 14:37:41.052255372 +0000 UTC m=+789.395364846" lastFinishedPulling="2025-11-25 14:37:43.495717537 +0000 UTC m=+791.838826961" observedRunningTime="2025-11-25 14:37:44.101992445 +0000 UTC m=+792.445101869" watchObservedRunningTime="2025-11-25 14:37:44.105368589 +0000 UTC m=+792.448478013" Nov 25 14:37:48 crc kubenswrapper[4796]: I1125 14:37:48.278912 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-kcqf5"] Nov 25 14:37:48 crc kubenswrapper[4796]: E1125 14:37:48.279792 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fee00b0-68b7-43d4-85a5-d63daf73962d" containerName="pull" Nov 25 14:37:48 crc kubenswrapper[4796]: I1125 14:37:48.279808 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fee00b0-68b7-43d4-85a5-d63daf73962d" containerName="pull" Nov 25 14:37:48 crc kubenswrapper[4796]: E1125 14:37:48.279826 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fee00b0-68b7-43d4-85a5-d63daf73962d" containerName="util" Nov 25 14:37:48 crc kubenswrapper[4796]: I1125 14:37:48.279834 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fee00b0-68b7-43d4-85a5-d63daf73962d" containerName="util" Nov 25 14:37:48 crc kubenswrapper[4796]: E1125 
14:37:48.279851 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fee00b0-68b7-43d4-85a5-d63daf73962d" containerName="extract" Nov 25 14:37:48 crc kubenswrapper[4796]: I1125 14:37:48.279858 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fee00b0-68b7-43d4-85a5-d63daf73962d" containerName="extract" Nov 25 14:37:48 crc kubenswrapper[4796]: I1125 14:37:48.279978 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fee00b0-68b7-43d4-85a5-d63daf73962d" containerName="extract" Nov 25 14:37:48 crc kubenswrapper[4796]: I1125 14:37:48.280424 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-kcqf5" Nov 25 14:37:48 crc kubenswrapper[4796]: I1125 14:37:48.283228 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-jxhkv" Nov 25 14:37:48 crc kubenswrapper[4796]: I1125 14:37:48.283883 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 25 14:37:48 crc kubenswrapper[4796]: I1125 14:37:48.284103 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 25 14:37:48 crc kubenswrapper[4796]: I1125 14:37:48.295439 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-kcqf5"] Nov 25 14:37:48 crc kubenswrapper[4796]: I1125 14:37:48.385249 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mwjd\" (UniqueName: \"kubernetes.io/projected/5c8c5a1b-b996-41da-96ab-07156e73016f-kube-api-access-4mwjd\") pod \"nmstate-operator-557fdffb88-kcqf5\" (UID: \"5c8c5a1b-b996-41da-96ab-07156e73016f\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-kcqf5" Nov 25 14:37:48 crc kubenswrapper[4796]: I1125 14:37:48.486634 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4mwjd\" (UniqueName: \"kubernetes.io/projected/5c8c5a1b-b996-41da-96ab-07156e73016f-kube-api-access-4mwjd\") pod \"nmstate-operator-557fdffb88-kcqf5\" (UID: \"5c8c5a1b-b996-41da-96ab-07156e73016f\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-kcqf5" Nov 25 14:37:48 crc kubenswrapper[4796]: I1125 14:37:48.516194 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mwjd\" (UniqueName: \"kubernetes.io/projected/5c8c5a1b-b996-41da-96ab-07156e73016f-kube-api-access-4mwjd\") pod \"nmstate-operator-557fdffb88-kcqf5\" (UID: \"5c8c5a1b-b996-41da-96ab-07156e73016f\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-kcqf5" Nov 25 14:37:48 crc kubenswrapper[4796]: I1125 14:37:48.594400 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-kcqf5" Nov 25 14:37:48 crc kubenswrapper[4796]: I1125 14:37:48.847962 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-kcqf5"] Nov 25 14:37:49 crc kubenswrapper[4796]: I1125 14:37:49.108187 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-kcqf5" event={"ID":"5c8c5a1b-b996-41da-96ab-07156e73016f","Type":"ContainerStarted","Data":"beb1955fc63e88cf7f82978f8d92f104980daede41fab2ed7d35ddc70f3d777b"} Nov 25 14:37:49 crc kubenswrapper[4796]: I1125 14:37:49.514453 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 14:37:49 crc kubenswrapper[4796]: I1125 14:37:49.514511 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" 
podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 14:37:49 crc kubenswrapper[4796]: I1125 14:37:49.514553 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 14:37:49 crc kubenswrapper[4796]: I1125 14:37:49.515131 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"51ad5aaaaec69282af8ba8d3ab0515dc687f8d212c22650d81fdfbfdba4b24a5"} pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 14:37:49 crc kubenswrapper[4796]: I1125 14:37:49.515192 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" containerID="cri-o://51ad5aaaaec69282af8ba8d3ab0515dc687f8d212c22650d81fdfbfdba4b24a5" gracePeriod=600 Nov 25 14:37:49 crc kubenswrapper[4796]: I1125 14:37:49.669145 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gpd2j" Nov 25 14:37:49 crc kubenswrapper[4796]: I1125 14:37:49.669212 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gpd2j" Nov 25 14:37:50 crc kubenswrapper[4796]: I1125 14:37:50.120050 4796 generic.go:334] "Generic (PLEG): container finished" podID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerID="51ad5aaaaec69282af8ba8d3ab0515dc687f8d212c22650d81fdfbfdba4b24a5" exitCode=0 Nov 25 14:37:50 crc kubenswrapper[4796]: I1125 14:37:50.120772 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerDied","Data":"51ad5aaaaec69282af8ba8d3ab0515dc687f8d212c22650d81fdfbfdba4b24a5"} Nov 25 14:37:50 crc kubenswrapper[4796]: I1125 14:37:50.120805 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerStarted","Data":"6ee44c58110c9589c1b970cfd7a594ab20931e9987e3c25566f0b8fb802d3fc7"} Nov 25 14:37:50 crc kubenswrapper[4796]: I1125 14:37:50.120827 4796 scope.go:117] "RemoveContainer" containerID="dc030d59a73583025e9c54fa4553f2524065ee43c0dc58ac32a4d2cfc4a3581d" Nov 25 14:37:50 crc kubenswrapper[4796]: I1125 14:37:50.721997 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gpd2j" podUID="3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9" containerName="registry-server" probeResult="failure" output=< Nov 25 14:37:50 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Nov 25 14:37:50 crc kubenswrapper[4796]: > Nov 25 14:37:51 crc kubenswrapper[4796]: I1125 14:37:51.128523 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-kcqf5" event={"ID":"5c8c5a1b-b996-41da-96ab-07156e73016f","Type":"ContainerStarted","Data":"4203b2342d3b2cd2ba567c89982e195cb392c40fee1b067af58e4d6d0f8dc971"} Nov 25 14:37:51 crc kubenswrapper[4796]: I1125 14:37:51.151347 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-kcqf5" podStartSLOduration=1.359576633 podStartE2EDuration="3.151325608s" podCreationTimestamp="2025-11-25 14:37:48 +0000 UTC" firstStartedPulling="2025-11-25 14:37:48.855104495 +0000 UTC m=+797.198213919" lastFinishedPulling="2025-11-25 14:37:50.64685347 +0000 UTC m=+798.989962894" observedRunningTime="2025-11-25 14:37:51.14875505 
+0000 UTC m=+799.491864494" watchObservedRunningTime="2025-11-25 14:37:51.151325608 +0000 UTC m=+799.494435032" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.407651 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-z2g7r"] Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.409423 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-z2g7r" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.411041 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-rmzmd" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.427243 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-z2g7r"] Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.435529 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-2mjnf"] Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.436288 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-2mjnf" Nov 25 14:37:57 crc kubenswrapper[4796]: W1125 14:37:57.442422 4796 reflector.go:561] object-"openshift-nmstate"/"openshift-nmstate-webhook": failed to list *v1.Secret: secrets "openshift-nmstate-webhook" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-nmstate": no relationship found between node 'crc' and this object Nov 25 14:37:57 crc kubenswrapper[4796]: E1125 14:37:57.442471 4796 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"openshift-nmstate-webhook\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-nmstate-webhook\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-nmstate\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.452102 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-5whlr"] Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.452990 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-5whlr" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.480524 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-2mjnf"] Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.520733 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jzv9\" (UniqueName: \"kubernetes.io/projected/b129a211-721a-412c-95fd-a1c27b7d3092-kube-api-access-8jzv9\") pod \"nmstate-metrics-5dcf9c57c5-z2g7r\" (UID: \"b129a211-721a-412c-95fd-a1c27b7d3092\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-z2g7r" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.548988 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-74nqq"] Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.549634 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-74nqq" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.557209 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.557382 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-6nmtt" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.565118 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-74nqq"] Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.569071 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.621805 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/d050fb17-6f98-4899-861e-b180f1587b64-dbus-socket\") pod \"nmstate-handler-5whlr\" (UID: \"d050fb17-6f98-4899-861e-b180f1587b64\") " pod="openshift-nmstate/nmstate-handler-5whlr" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.622111 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7bcb5530-fd67-4fc7-96c1-dfdb9dd8ad67-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-2mjnf\" (UID: \"7bcb5530-fd67-4fc7-96c1-dfdb9dd8ad67\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-2mjnf" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.622214 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jzv9\" (UniqueName: \"kubernetes.io/projected/b129a211-721a-412c-95fd-a1c27b7d3092-kube-api-access-8jzv9\") pod \"nmstate-metrics-5dcf9c57c5-z2g7r\" (UID: \"b129a211-721a-412c-95fd-a1c27b7d3092\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-z2g7r" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.622299 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffrjk\" (UniqueName: \"kubernetes.io/projected/7bcb5530-fd67-4fc7-96c1-dfdb9dd8ad67-kube-api-access-ffrjk\") pod \"nmstate-webhook-6b89b748d8-2mjnf\" (UID: \"7bcb5530-fd67-4fc7-96c1-dfdb9dd8ad67\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-2mjnf" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.622417 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d050fb17-6f98-4899-861e-b180f1587b64-nmstate-lock\") pod \"nmstate-handler-5whlr\" (UID: \"d050fb17-6f98-4899-861e-b180f1587b64\") " pod="openshift-nmstate/nmstate-handler-5whlr" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.622502 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88kzx\" (UniqueName: \"kubernetes.io/projected/d050fb17-6f98-4899-861e-b180f1587b64-kube-api-access-88kzx\") pod \"nmstate-handler-5whlr\" (UID: \"d050fb17-6f98-4899-861e-b180f1587b64\") " pod="openshift-nmstate/nmstate-handler-5whlr" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.622600 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d050fb17-6f98-4899-861e-b180f1587b64-ovs-socket\") pod \"nmstate-handler-5whlr\" (UID: \"d050fb17-6f98-4899-861e-b180f1587b64\") " pod="openshift-nmstate/nmstate-handler-5whlr" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.662641 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jzv9\" (UniqueName: \"kubernetes.io/projected/b129a211-721a-412c-95fd-a1c27b7d3092-kube-api-access-8jzv9\") pod \"nmstate-metrics-5dcf9c57c5-z2g7r\" (UID: \"b129a211-721a-412c-95fd-a1c27b7d3092\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-z2g7r" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.724198 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebb6d789-f33f-47d5-a8b5-b727a0d54def-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-74nqq\" (UID: \"ebb6d789-f33f-47d5-a8b5-b727a0d54def\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-74nqq" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.724276 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d050fb17-6f98-4899-861e-b180f1587b64-nmstate-lock\") pod \"nmstate-handler-5whlr\" (UID: \"d050fb17-6f98-4899-861e-b180f1587b64\") " pod="openshift-nmstate/nmstate-handler-5whlr" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 
14:37:57.724327 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d050fb17-6f98-4899-861e-b180f1587b64-nmstate-lock\") pod \"nmstate-handler-5whlr\" (UID: \"d050fb17-6f98-4899-861e-b180f1587b64\") " pod="openshift-nmstate/nmstate-handler-5whlr" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.724333 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88kzx\" (UniqueName: \"kubernetes.io/projected/d050fb17-6f98-4899-861e-b180f1587b64-kube-api-access-88kzx\") pod \"nmstate-handler-5whlr\" (UID: \"d050fb17-6f98-4899-861e-b180f1587b64\") " pod="openshift-nmstate/nmstate-handler-5whlr" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.724380 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d050fb17-6f98-4899-861e-b180f1587b64-ovs-socket\") pod \"nmstate-handler-5whlr\" (UID: \"d050fb17-6f98-4899-861e-b180f1587b64\") " pod="openshift-nmstate/nmstate-handler-5whlr" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.724412 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d050fb17-6f98-4899-861e-b180f1587b64-dbus-socket\") pod \"nmstate-handler-5whlr\" (UID: \"d050fb17-6f98-4899-861e-b180f1587b64\") " pod="openshift-nmstate/nmstate-handler-5whlr" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.724434 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7bcb5530-fd67-4fc7-96c1-dfdb9dd8ad67-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-2mjnf\" (UID: \"7bcb5530-fd67-4fc7-96c1-dfdb9dd8ad67\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-2mjnf" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.724455 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j76n9\" (UniqueName: \"kubernetes.io/projected/ebb6d789-f33f-47d5-a8b5-b727a0d54def-kube-api-access-j76n9\") pod \"nmstate-console-plugin-5874bd7bc5-74nqq\" (UID: \"ebb6d789-f33f-47d5-a8b5-b727a0d54def\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-74nqq" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.724475 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ebb6d789-f33f-47d5-a8b5-b727a0d54def-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-74nqq\" (UID: \"ebb6d789-f33f-47d5-a8b5-b727a0d54def\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-74nqq" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.724497 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffrjk\" (UniqueName: \"kubernetes.io/projected/7bcb5530-fd67-4fc7-96c1-dfdb9dd8ad67-kube-api-access-ffrjk\") pod \"nmstate-webhook-6b89b748d8-2mjnf\" (UID: \"7bcb5530-fd67-4fc7-96c1-dfdb9dd8ad67\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-2mjnf" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.724775 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d050fb17-6f98-4899-861e-b180f1587b64-ovs-socket\") pod \"nmstate-handler-5whlr\" (UID: \"d050fb17-6f98-4899-861e-b180f1587b64\") " pod="openshift-nmstate/nmstate-handler-5whlr" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.724820 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-z2g7r" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.725011 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d050fb17-6f98-4899-861e-b180f1587b64-dbus-socket\") pod \"nmstate-handler-5whlr\" (UID: \"d050fb17-6f98-4899-861e-b180f1587b64\") " pod="openshift-nmstate/nmstate-handler-5whlr" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.755166 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88kzx\" (UniqueName: \"kubernetes.io/projected/d050fb17-6f98-4899-861e-b180f1587b64-kube-api-access-88kzx\") pod \"nmstate-handler-5whlr\" (UID: \"d050fb17-6f98-4899-861e-b180f1587b64\") " pod="openshift-nmstate/nmstate-handler-5whlr" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.755209 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffrjk\" (UniqueName: \"kubernetes.io/projected/7bcb5530-fd67-4fc7-96c1-dfdb9dd8ad67-kube-api-access-ffrjk\") pod \"nmstate-webhook-6b89b748d8-2mjnf\" (UID: \"7bcb5530-fd67-4fc7-96c1-dfdb9dd8ad67\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-2mjnf" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.766415 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7699df-7nxfh"] Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.767059 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.776016 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-5whlr" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.823622 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7699df-7nxfh"] Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.825234 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j76n9\" (UniqueName: \"kubernetes.io/projected/ebb6d789-f33f-47d5-a8b5-b727a0d54def-kube-api-access-j76n9\") pod \"nmstate-console-plugin-5874bd7bc5-74nqq\" (UID: \"ebb6d789-f33f-47d5-a8b5-b727a0d54def\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-74nqq" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.825275 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ebb6d789-f33f-47d5-a8b5-b727a0d54def-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-74nqq\" (UID: \"ebb6d789-f33f-47d5-a8b5-b727a0d54def\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-74nqq" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.825316 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebb6d789-f33f-47d5-a8b5-b727a0d54def-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-74nqq\" (UID: \"ebb6d789-f33f-47d5-a8b5-b727a0d54def\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-74nqq" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.826331 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ebb6d789-f33f-47d5-a8b5-b727a0d54def-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-74nqq\" (UID: \"ebb6d789-f33f-47d5-a8b5-b727a0d54def\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-74nqq" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.828654 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebb6d789-f33f-47d5-a8b5-b727a0d54def-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-74nqq\" (UID: \"ebb6d789-f33f-47d5-a8b5-b727a0d54def\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-74nqq" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.849295 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j76n9\" (UniqueName: \"kubernetes.io/projected/ebb6d789-f33f-47d5-a8b5-b727a0d54def-kube-api-access-j76n9\") pod \"nmstate-console-plugin-5874bd7bc5-74nqq\" (UID: \"ebb6d789-f33f-47d5-a8b5-b727a0d54def\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-74nqq" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.862937 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-74nqq" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.925998 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7a4b81fa-0672-4731-a416-b8a403ae2333-oauth-serving-cert\") pod \"console-f9d7699df-7nxfh\" (UID: \"7a4b81fa-0672-4731-a416-b8a403ae2333\") " pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.926333 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7a4b81fa-0672-4731-a416-b8a403ae2333-console-oauth-config\") pod \"console-f9d7699df-7nxfh\" (UID: \"7a4b81fa-0672-4731-a416-b8a403ae2333\") " pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.926394 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22d25\" 
(UniqueName: \"kubernetes.io/projected/7a4b81fa-0672-4731-a416-b8a403ae2333-kube-api-access-22d25\") pod \"console-f9d7699df-7nxfh\" (UID: \"7a4b81fa-0672-4731-a416-b8a403ae2333\") " pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.926419 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a4b81fa-0672-4731-a416-b8a403ae2333-console-serving-cert\") pod \"console-f9d7699df-7nxfh\" (UID: \"7a4b81fa-0672-4731-a416-b8a403ae2333\") " pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.926441 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7a4b81fa-0672-4731-a416-b8a403ae2333-service-ca\") pod \"console-f9d7699df-7nxfh\" (UID: \"7a4b81fa-0672-4731-a416-b8a403ae2333\") " pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.926467 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7a4b81fa-0672-4731-a416-b8a403ae2333-console-config\") pod \"console-f9d7699df-7nxfh\" (UID: \"7a4b81fa-0672-4731-a416-b8a403ae2333\") " pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.926490 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a4b81fa-0672-4731-a416-b8a403ae2333-trusted-ca-bundle\") pod \"console-f9d7699df-7nxfh\" (UID: \"7a4b81fa-0672-4731-a416-b8a403ae2333\") " pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:37:57 crc kubenswrapper[4796]: I1125 14:37:57.966071 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-z2g7r"] Nov 25 14:37:57 crc kubenswrapper[4796]: W1125 14:37:57.976074 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb129a211_721a_412c_95fd_a1c27b7d3092.slice/crio-c521585bc3d80aae90332d46bcb44ad531f64eeff62119760a026a307588b923 WatchSource:0}: Error finding container c521585bc3d80aae90332d46bcb44ad531f64eeff62119760a026a307588b923: Status 404 returned error can't find the container with id c521585bc3d80aae90332d46bcb44ad531f64eeff62119760a026a307588b923 Nov 25 14:37:58 crc kubenswrapper[4796]: I1125 14:37:58.027250 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7a4b81fa-0672-4731-a416-b8a403ae2333-oauth-serving-cert\") pod \"console-f9d7699df-7nxfh\" (UID: \"7a4b81fa-0672-4731-a416-b8a403ae2333\") " pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:37:58 crc kubenswrapper[4796]: I1125 14:37:58.027308 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7a4b81fa-0672-4731-a416-b8a403ae2333-console-oauth-config\") pod \"console-f9d7699df-7nxfh\" (UID: \"7a4b81fa-0672-4731-a416-b8a403ae2333\") " pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:37:58 crc kubenswrapper[4796]: I1125 14:37:58.027332 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22d25\" (UniqueName: \"kubernetes.io/projected/7a4b81fa-0672-4731-a416-b8a403ae2333-kube-api-access-22d25\") pod \"console-f9d7699df-7nxfh\" (UID: \"7a4b81fa-0672-4731-a416-b8a403ae2333\") " pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:37:58 crc kubenswrapper[4796]: I1125 14:37:58.027356 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7a4b81fa-0672-4731-a416-b8a403ae2333-console-serving-cert\") pod \"console-f9d7699df-7nxfh\" (UID: \"7a4b81fa-0672-4731-a416-b8a403ae2333\") " pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:37:58 crc kubenswrapper[4796]: I1125 14:37:58.027377 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7a4b81fa-0672-4731-a416-b8a403ae2333-service-ca\") pod \"console-f9d7699df-7nxfh\" (UID: \"7a4b81fa-0672-4731-a416-b8a403ae2333\") " pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:37:58 crc kubenswrapper[4796]: I1125 14:37:58.027407 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7a4b81fa-0672-4731-a416-b8a403ae2333-console-config\") pod \"console-f9d7699df-7nxfh\" (UID: \"7a4b81fa-0672-4731-a416-b8a403ae2333\") " pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:37:58 crc kubenswrapper[4796]: I1125 14:37:58.027430 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a4b81fa-0672-4731-a416-b8a403ae2333-trusted-ca-bundle\") pod \"console-f9d7699df-7nxfh\" (UID: \"7a4b81fa-0672-4731-a416-b8a403ae2333\") " pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:37:58 crc kubenswrapper[4796]: I1125 14:37:58.028755 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a4b81fa-0672-4731-a416-b8a403ae2333-trusted-ca-bundle\") pod \"console-f9d7699df-7nxfh\" (UID: \"7a4b81fa-0672-4731-a416-b8a403ae2333\") " pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:37:58 crc kubenswrapper[4796]: I1125 14:37:58.029266 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7a4b81fa-0672-4731-a416-b8a403ae2333-service-ca\") 
pod \"console-f9d7699df-7nxfh\" (UID: \"7a4b81fa-0672-4731-a416-b8a403ae2333\") " pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:37:58 crc kubenswrapper[4796]: I1125 14:37:58.029362 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7a4b81fa-0672-4731-a416-b8a403ae2333-console-config\") pod \"console-f9d7699df-7nxfh\" (UID: \"7a4b81fa-0672-4731-a416-b8a403ae2333\") " pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:37:58 crc kubenswrapper[4796]: I1125 14:37:58.032835 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7a4b81fa-0672-4731-a416-b8a403ae2333-console-oauth-config\") pod \"console-f9d7699df-7nxfh\" (UID: \"7a4b81fa-0672-4731-a416-b8a403ae2333\") " pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:37:58 crc kubenswrapper[4796]: I1125 14:37:58.033217 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7a4b81fa-0672-4731-a416-b8a403ae2333-oauth-serving-cert\") pod \"console-f9d7699df-7nxfh\" (UID: \"7a4b81fa-0672-4731-a416-b8a403ae2333\") " pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:37:58 crc kubenswrapper[4796]: I1125 14:37:58.033261 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7a4b81fa-0672-4731-a416-b8a403ae2333-console-serving-cert\") pod \"console-f9d7699df-7nxfh\" (UID: \"7a4b81fa-0672-4731-a416-b8a403ae2333\") " pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:37:58 crc kubenswrapper[4796]: I1125 14:37:58.048265 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-74nqq"] Nov 25 14:37:58 crc kubenswrapper[4796]: I1125 14:37:58.049972 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-22d25\" (UniqueName: \"kubernetes.io/projected/7a4b81fa-0672-4731-a416-b8a403ae2333-kube-api-access-22d25\") pod \"console-f9d7699df-7nxfh\" (UID: \"7a4b81fa-0672-4731-a416-b8a403ae2333\") " pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:37:58 crc kubenswrapper[4796]: W1125 14:37:58.054513 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebb6d789_f33f_47d5_a8b5_b727a0d54def.slice/crio-c82122995aec2a3490f3d72af43e296b6af6fbb53c76721dbc3c749e86aaa6a8 WatchSource:0}: Error finding container c82122995aec2a3490f3d72af43e296b6af6fbb53c76721dbc3c749e86aaa6a8: Status 404 returned error can't find the container with id c82122995aec2a3490f3d72af43e296b6af6fbb53c76721dbc3c749e86aaa6a8 Nov 25 14:37:58 crc kubenswrapper[4796]: I1125 14:37:58.143568 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:37:58 crc kubenswrapper[4796]: I1125 14:37:58.183670 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-z2g7r" event={"ID":"b129a211-721a-412c-95fd-a1c27b7d3092","Type":"ContainerStarted","Data":"c521585bc3d80aae90332d46bcb44ad531f64eeff62119760a026a307588b923"} Nov 25 14:37:58 crc kubenswrapper[4796]: I1125 14:37:58.185467 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-74nqq" event={"ID":"ebb6d789-f33f-47d5-a8b5-b727a0d54def","Type":"ContainerStarted","Data":"c82122995aec2a3490f3d72af43e296b6af6fbb53c76721dbc3c749e86aaa6a8"} Nov 25 14:37:58 crc kubenswrapper[4796]: I1125 14:37:58.186557 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5whlr" event={"ID":"d050fb17-6f98-4899-861e-b180f1587b64","Type":"ContainerStarted","Data":"d46a3eab0b599b5069a7a7f44d7ab8dc079d5c5fe4e04a565c3b639fe0fecf9b"} Nov 25 14:37:58 crc 
kubenswrapper[4796]: I1125 14:37:58.606634 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7699df-7nxfh"] Nov 25 14:37:58 crc kubenswrapper[4796]: E1125 14:37:58.725834 4796 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: failed to sync secret cache: timed out waiting for the condition Nov 25 14:37:58 crc kubenswrapper[4796]: E1125 14:37:58.726280 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7bcb5530-fd67-4fc7-96c1-dfdb9dd8ad67-tls-key-pair podName:7bcb5530-fd67-4fc7-96c1-dfdb9dd8ad67 nodeName:}" failed. No retries permitted until 2025-11-25 14:37:59.226245407 +0000 UTC m=+807.569354881 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/7bcb5530-fd67-4fc7-96c1-dfdb9dd8ad67-tls-key-pair") pod "nmstate-webhook-6b89b748d8-2mjnf" (UID: "7bcb5530-fd67-4fc7-96c1-dfdb9dd8ad67") : failed to sync secret cache: timed out waiting for the condition Nov 25 14:37:58 crc kubenswrapper[4796]: I1125 14:37:58.882958 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 25 14:37:59 crc kubenswrapper[4796]: I1125 14:37:59.194166 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7699df-7nxfh" event={"ID":"7a4b81fa-0672-4731-a416-b8a403ae2333","Type":"ContainerStarted","Data":"58b1594af7fc8ff93381863304d706ae2923a6162c408540b2f8356b76d42514"} Nov 25 14:37:59 crc kubenswrapper[4796]: I1125 14:37:59.194231 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7699df-7nxfh" event={"ID":"7a4b81fa-0672-4731-a416-b8a403ae2333","Type":"ContainerStarted","Data":"9901145000b41dd8cad8693aad7a57e925a2c1ddba1c5bd0d666d178f95c4e81"} Nov 25 14:37:59 crc kubenswrapper[4796]: I1125 14:37:59.246553 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7bcb5530-fd67-4fc7-96c1-dfdb9dd8ad67-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-2mjnf\" (UID: \"7bcb5530-fd67-4fc7-96c1-dfdb9dd8ad67\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-2mjnf" Nov 25 14:37:59 crc kubenswrapper[4796]: I1125 14:37:59.256777 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7bcb5530-fd67-4fc7-96c1-dfdb9dd8ad67-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-2mjnf\" (UID: \"7bcb5530-fd67-4fc7-96c1-dfdb9dd8ad67\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-2mjnf" Nov 25 14:37:59 crc kubenswrapper[4796]: I1125 14:37:59.557279 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-2mjnf" Nov 25 14:37:59 crc kubenswrapper[4796]: I1125 14:37:59.718456 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gpd2j" Nov 25 14:37:59 crc kubenswrapper[4796]: I1125 14:37:59.725267 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7699df-7nxfh" podStartSLOduration=2.725251589 podStartE2EDuration="2.725251589s" podCreationTimestamp="2025-11-25 14:37:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:37:59.231437058 +0000 UTC m=+807.574546522" watchObservedRunningTime="2025-11-25 14:37:59.725251589 +0000 UTC m=+808.068361013" Nov 25 14:37:59 crc kubenswrapper[4796]: I1125 14:37:59.728643 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-2mjnf"] Nov 25 14:37:59 crc kubenswrapper[4796]: W1125 14:37:59.739780 4796 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bcb5530_fd67_4fc7_96c1_dfdb9dd8ad67.slice/crio-3a1dfe7b5479268a5cb8db4e90cfd6f1e56bbd71e502c898153553ad575fc6a2 WatchSource:0}: Error finding container 3a1dfe7b5479268a5cb8db4e90cfd6f1e56bbd71e502c898153553ad575fc6a2: Status 404 returned error can't find the container with id 3a1dfe7b5479268a5cb8db4e90cfd6f1e56bbd71e502c898153553ad575fc6a2 Nov 25 14:37:59 crc kubenswrapper[4796]: I1125 14:37:59.763766 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gpd2j" Nov 25 14:37:59 crc kubenswrapper[4796]: I1125 14:37:59.947901 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gpd2j"] Nov 25 14:38:00 crc kubenswrapper[4796]: I1125 14:38:00.200865 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-2mjnf" event={"ID":"7bcb5530-fd67-4fc7-96c1-dfdb9dd8ad67","Type":"ContainerStarted","Data":"3a1dfe7b5479268a5cb8db4e90cfd6f1e56bbd71e502c898153553ad575fc6a2"} Nov 25 14:38:01 crc kubenswrapper[4796]: I1125 14:38:01.210409 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-z2g7r" event={"ID":"b129a211-721a-412c-95fd-a1c27b7d3092","Type":"ContainerStarted","Data":"0ead1d795400a2f111e1a5cc1246ad2ca2e7884e651b147a3e2d586e35b13606"} Nov 25 14:38:01 crc kubenswrapper[4796]: I1125 14:38:01.212025 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-74nqq" event={"ID":"ebb6d789-f33f-47d5-a8b5-b727a0d54def","Type":"ContainerStarted","Data":"f43c984296df0fc790dbc04d3c63df9caa49ccb68fd6ba2ae91e596516e7ac80"} Nov 25 14:38:01 crc kubenswrapper[4796]: I1125 14:38:01.212931 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gpd2j" podUID="3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9" 
containerName="registry-server" containerID="cri-o://61475b43e26c33d3f9196427d63902b2b74f6a52136cecbe54d7a0078bd345ed" gracePeriod=2 Nov 25 14:38:01 crc kubenswrapper[4796]: I1125 14:38:01.212990 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5whlr" event={"ID":"d050fb17-6f98-4899-861e-b180f1587b64","Type":"ContainerStarted","Data":"20bd3b5c7e3bc86770585b05159aaaf58b8900678a5d8653a558c4b465da8f25"} Nov 25 14:38:01 crc kubenswrapper[4796]: I1125 14:38:01.213240 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-5whlr" Nov 25 14:38:01 crc kubenswrapper[4796]: I1125 14:38:01.235384 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-74nqq" podStartSLOduration=1.406433196 podStartE2EDuration="4.235368774s" podCreationTimestamp="2025-11-25 14:37:57 +0000 UTC" firstStartedPulling="2025-11-25 14:37:58.056621857 +0000 UTC m=+806.399731281" lastFinishedPulling="2025-11-25 14:38:00.885557435 +0000 UTC m=+809.228666859" observedRunningTime="2025-11-25 14:38:01.230552417 +0000 UTC m=+809.573661841" watchObservedRunningTime="2025-11-25 14:38:01.235368774 +0000 UTC m=+809.578478198" Nov 25 14:38:01 crc kubenswrapper[4796]: I1125 14:38:01.251362 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-5whlr" podStartSLOduration=1.193612305 podStartE2EDuration="4.251346733s" podCreationTimestamp="2025-11-25 14:37:57 +0000 UTC" firstStartedPulling="2025-11-25 14:37:57.830513059 +0000 UTC m=+806.173622483" lastFinishedPulling="2025-11-25 14:38:00.888247487 +0000 UTC m=+809.231356911" observedRunningTime="2025-11-25 14:38:01.248192647 +0000 UTC m=+809.591302091" watchObservedRunningTime="2025-11-25 14:38:01.251346733 +0000 UTC m=+809.594456157" Nov 25 14:38:01 crc kubenswrapper[4796]: I1125 14:38:01.540244 4796 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gpd2j" Nov 25 14:38:01 crc kubenswrapper[4796]: I1125 14:38:01.683699 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9-catalog-content\") pod \"3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9\" (UID: \"3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9\") " Nov 25 14:38:01 crc kubenswrapper[4796]: I1125 14:38:01.683792 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9-utilities\") pod \"3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9\" (UID: \"3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9\") " Nov 25 14:38:01 crc kubenswrapper[4796]: I1125 14:38:01.683853 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9xql\" (UniqueName: \"kubernetes.io/projected/3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9-kube-api-access-h9xql\") pod \"3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9\" (UID: \"3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9\") " Nov 25 14:38:01 crc kubenswrapper[4796]: I1125 14:38:01.685114 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9-utilities" (OuterVolumeSpecName: "utilities") pod "3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9" (UID: "3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:38:01 crc kubenswrapper[4796]: I1125 14:38:01.690416 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9-kube-api-access-h9xql" (OuterVolumeSpecName: "kube-api-access-h9xql") pod "3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9" (UID: "3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9"). InnerVolumeSpecName "kube-api-access-h9xql". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:38:01 crc kubenswrapper[4796]: I1125 14:38:01.779219 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9" (UID: "3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:38:01 crc kubenswrapper[4796]: I1125 14:38:01.784885 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9xql\" (UniqueName: \"kubernetes.io/projected/3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9-kube-api-access-h9xql\") on node \"crc\" DevicePath \"\"" Nov 25 14:38:01 crc kubenswrapper[4796]: I1125 14:38:01.784971 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:38:01 crc kubenswrapper[4796]: I1125 14:38:01.785046 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:38:02 crc kubenswrapper[4796]: I1125 14:38:02.219447 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-2mjnf" event={"ID":"7bcb5530-fd67-4fc7-96c1-dfdb9dd8ad67","Type":"ContainerStarted","Data":"67e156d8d07d1eb5143e9d03502fd66434e5bb8e310caa67ef2b5e130d3a7411"} Nov 25 14:38:02 crc kubenswrapper[4796]: I1125 14:38:02.220381 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-2mjnf" Nov 25 14:38:02 crc kubenswrapper[4796]: I1125 14:38:02.223147 4796 generic.go:334] "Generic (PLEG): container finished" podID="3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9" 
containerID="61475b43e26c33d3f9196427d63902b2b74f6a52136cecbe54d7a0078bd345ed" exitCode=0 Nov 25 14:38:02 crc kubenswrapper[4796]: I1125 14:38:02.223198 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gpd2j" Nov 25 14:38:02 crc kubenswrapper[4796]: I1125 14:38:02.223212 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpd2j" event={"ID":"3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9","Type":"ContainerDied","Data":"61475b43e26c33d3f9196427d63902b2b74f6a52136cecbe54d7a0078bd345ed"} Nov 25 14:38:02 crc kubenswrapper[4796]: I1125 14:38:02.224545 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpd2j" event={"ID":"3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9","Type":"ContainerDied","Data":"d5156871af48a5ef6e20a91e3c7e5124dfe6f3d50dd16e50f5db49045b33528e"} Nov 25 14:38:02 crc kubenswrapper[4796]: I1125 14:38:02.224591 4796 scope.go:117] "RemoveContainer" containerID="61475b43e26c33d3f9196427d63902b2b74f6a52136cecbe54d7a0078bd345ed" Nov 25 14:38:02 crc kubenswrapper[4796]: I1125 14:38:02.241199 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-2mjnf" podStartSLOduration=3.7570583170000003 podStartE2EDuration="5.241178455s" podCreationTimestamp="2025-11-25 14:37:57 +0000 UTC" firstStartedPulling="2025-11-25 14:37:59.742085695 +0000 UTC m=+808.085195119" lastFinishedPulling="2025-11-25 14:38:01.226205833 +0000 UTC m=+809.569315257" observedRunningTime="2025-11-25 14:38:02.239142992 +0000 UTC m=+810.582252416" watchObservedRunningTime="2025-11-25 14:38:02.241178455 +0000 UTC m=+810.584287879" Nov 25 14:38:02 crc kubenswrapper[4796]: I1125 14:38:02.256793 4796 scope.go:117] "RemoveContainer" containerID="857b8ed18ef9fb075f9cc57c62e115ef52f334ea0c9cc3b515e35d766733ced7" Nov 25 14:38:02 crc kubenswrapper[4796]: I1125 14:38:02.266456 4796 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gpd2j"] Nov 25 14:38:02 crc kubenswrapper[4796]: I1125 14:38:02.272197 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gpd2j"] Nov 25 14:38:02 crc kubenswrapper[4796]: I1125 14:38:02.284732 4796 scope.go:117] "RemoveContainer" containerID="78c0c24af9242c7823224807320dde3d8cec5e1b7cecb02675b5a1b47d9bce61" Nov 25 14:38:02 crc kubenswrapper[4796]: I1125 14:38:02.312340 4796 scope.go:117] "RemoveContainer" containerID="61475b43e26c33d3f9196427d63902b2b74f6a52136cecbe54d7a0078bd345ed" Nov 25 14:38:02 crc kubenswrapper[4796]: E1125 14:38:02.312963 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61475b43e26c33d3f9196427d63902b2b74f6a52136cecbe54d7a0078bd345ed\": container with ID starting with 61475b43e26c33d3f9196427d63902b2b74f6a52136cecbe54d7a0078bd345ed not found: ID does not exist" containerID="61475b43e26c33d3f9196427d63902b2b74f6a52136cecbe54d7a0078bd345ed" Nov 25 14:38:02 crc kubenswrapper[4796]: I1125 14:38:02.313024 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61475b43e26c33d3f9196427d63902b2b74f6a52136cecbe54d7a0078bd345ed"} err="failed to get container status \"61475b43e26c33d3f9196427d63902b2b74f6a52136cecbe54d7a0078bd345ed\": rpc error: code = NotFound desc = could not find container \"61475b43e26c33d3f9196427d63902b2b74f6a52136cecbe54d7a0078bd345ed\": container with ID starting with 61475b43e26c33d3f9196427d63902b2b74f6a52136cecbe54d7a0078bd345ed not found: ID does not exist" Nov 25 14:38:02 crc kubenswrapper[4796]: I1125 14:38:02.313056 4796 scope.go:117] "RemoveContainer" containerID="857b8ed18ef9fb075f9cc57c62e115ef52f334ea0c9cc3b515e35d766733ced7" Nov 25 14:38:02 crc kubenswrapper[4796]: E1125 14:38:02.313412 4796 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"857b8ed18ef9fb075f9cc57c62e115ef52f334ea0c9cc3b515e35d766733ced7\": container with ID starting with 857b8ed18ef9fb075f9cc57c62e115ef52f334ea0c9cc3b515e35d766733ced7 not found: ID does not exist" containerID="857b8ed18ef9fb075f9cc57c62e115ef52f334ea0c9cc3b515e35d766733ced7" Nov 25 14:38:02 crc kubenswrapper[4796]: I1125 14:38:02.313534 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"857b8ed18ef9fb075f9cc57c62e115ef52f334ea0c9cc3b515e35d766733ced7"} err="failed to get container status \"857b8ed18ef9fb075f9cc57c62e115ef52f334ea0c9cc3b515e35d766733ced7\": rpc error: code = NotFound desc = could not find container \"857b8ed18ef9fb075f9cc57c62e115ef52f334ea0c9cc3b515e35d766733ced7\": container with ID starting with 857b8ed18ef9fb075f9cc57c62e115ef52f334ea0c9cc3b515e35d766733ced7 not found: ID does not exist" Nov 25 14:38:02 crc kubenswrapper[4796]: I1125 14:38:02.313589 4796 scope.go:117] "RemoveContainer" containerID="78c0c24af9242c7823224807320dde3d8cec5e1b7cecb02675b5a1b47d9bce61" Nov 25 14:38:02 crc kubenswrapper[4796]: E1125 14:38:02.314071 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78c0c24af9242c7823224807320dde3d8cec5e1b7cecb02675b5a1b47d9bce61\": container with ID starting with 78c0c24af9242c7823224807320dde3d8cec5e1b7cecb02675b5a1b47d9bce61 not found: ID does not exist" containerID="78c0c24af9242c7823224807320dde3d8cec5e1b7cecb02675b5a1b47d9bce61" Nov 25 14:38:02 crc kubenswrapper[4796]: I1125 14:38:02.314109 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78c0c24af9242c7823224807320dde3d8cec5e1b7cecb02675b5a1b47d9bce61"} err="failed to get container status \"78c0c24af9242c7823224807320dde3d8cec5e1b7cecb02675b5a1b47d9bce61\": rpc error: code = NotFound desc = could not find container 
\"78c0c24af9242c7823224807320dde3d8cec5e1b7cecb02675b5a1b47d9bce61\": container with ID starting with 78c0c24af9242c7823224807320dde3d8cec5e1b7cecb02675b5a1b47d9bce61 not found: ID does not exist" Nov 25 14:38:02 crc kubenswrapper[4796]: I1125 14:38:02.419187 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9" path="/var/lib/kubelet/pods/3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9/volumes" Nov 25 14:38:04 crc kubenswrapper[4796]: I1125 14:38:04.245947 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-z2g7r" event={"ID":"b129a211-721a-412c-95fd-a1c27b7d3092","Type":"ContainerStarted","Data":"48a3691038cee33b312163fd76ae569b96861a275dae520d0424480325d98199"} Nov 25 14:38:04 crc kubenswrapper[4796]: I1125 14:38:04.273455 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-z2g7r" podStartSLOduration=1.936748169 podStartE2EDuration="7.27342724s" podCreationTimestamp="2025-11-25 14:37:57 +0000 UTC" firstStartedPulling="2025-11-25 14:37:57.979829735 +0000 UTC m=+806.322939159" lastFinishedPulling="2025-11-25 14:38:03.316508806 +0000 UTC m=+811.659618230" observedRunningTime="2025-11-25 14:38:04.266759145 +0000 UTC m=+812.609868609" watchObservedRunningTime="2025-11-25 14:38:04.27342724 +0000 UTC m=+812.616536704" Nov 25 14:38:07 crc kubenswrapper[4796]: I1125 14:38:07.817014 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-5whlr" Nov 25 14:38:08 crc kubenswrapper[4796]: I1125 14:38:08.144193 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:38:08 crc kubenswrapper[4796]: I1125 14:38:08.144246 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:38:08 crc kubenswrapper[4796]: 
I1125 14:38:08.153519 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:38:08 crc kubenswrapper[4796]: I1125 14:38:08.278424 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7699df-7nxfh" Nov 25 14:38:08 crc kubenswrapper[4796]: I1125 14:38:08.358371 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-x57qm"] Nov 25 14:38:19 crc kubenswrapper[4796]: I1125 14:38:19.565830 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-2mjnf" Nov 25 14:38:31 crc kubenswrapper[4796]: I1125 14:38:31.567863 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb"] Nov 25 14:38:31 crc kubenswrapper[4796]: E1125 14:38:31.568817 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9" containerName="registry-server" Nov 25 14:38:31 crc kubenswrapper[4796]: I1125 14:38:31.568833 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9" containerName="registry-server" Nov 25 14:38:31 crc kubenswrapper[4796]: E1125 14:38:31.568844 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9" containerName="extract-content" Nov 25 14:38:31 crc kubenswrapper[4796]: I1125 14:38:31.568852 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9" containerName="extract-content" Nov 25 14:38:31 crc kubenswrapper[4796]: E1125 14:38:31.568869 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9" containerName="extract-utilities" Nov 25 14:38:31 crc kubenswrapper[4796]: I1125 14:38:31.568877 4796 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9" containerName="extract-utilities" Nov 25 14:38:31 crc kubenswrapper[4796]: I1125 14:38:31.569028 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e1c8444-48ee-49c8-aad4-9de8c9bfa5b9" containerName="registry-server" Nov 25 14:38:31 crc kubenswrapper[4796]: I1125 14:38:31.569914 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb" Nov 25 14:38:31 crc kubenswrapper[4796]: I1125 14:38:31.572209 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 25 14:38:31 crc kubenswrapper[4796]: I1125 14:38:31.578835 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb"] Nov 25 14:38:31 crc kubenswrapper[4796]: I1125 14:38:31.636216 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/655b2cd8-b6a5-4ab4-848d-908496b6bcc8-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb\" (UID: \"655b2cd8-b6a5-4ab4-848d-908496b6bcc8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb" Nov 25 14:38:31 crc kubenswrapper[4796]: I1125 14:38:31.636274 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbkbq\" (UniqueName: \"kubernetes.io/projected/655b2cd8-b6a5-4ab4-848d-908496b6bcc8-kube-api-access-qbkbq\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb\" (UID: \"655b2cd8-b6a5-4ab4-848d-908496b6bcc8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb" Nov 25 14:38:31 crc kubenswrapper[4796]: I1125 14:38:31.636317 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/655b2cd8-b6a5-4ab4-848d-908496b6bcc8-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb\" (UID: \"655b2cd8-b6a5-4ab4-848d-908496b6bcc8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb" Nov 25 14:38:31 crc kubenswrapper[4796]: I1125 14:38:31.737427 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbkbq\" (UniqueName: \"kubernetes.io/projected/655b2cd8-b6a5-4ab4-848d-908496b6bcc8-kube-api-access-qbkbq\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb\" (UID: \"655b2cd8-b6a5-4ab4-848d-908496b6bcc8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb" Nov 25 14:38:31 crc kubenswrapper[4796]: I1125 14:38:31.737688 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/655b2cd8-b6a5-4ab4-848d-908496b6bcc8-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb\" (UID: \"655b2cd8-b6a5-4ab4-848d-908496b6bcc8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb" Nov 25 14:38:31 crc kubenswrapper[4796]: I1125 14:38:31.737751 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/655b2cd8-b6a5-4ab4-848d-908496b6bcc8-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb\" (UID: \"655b2cd8-b6a5-4ab4-848d-908496b6bcc8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb" Nov 25 14:38:31 crc kubenswrapper[4796]: I1125 14:38:31.738161 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/655b2cd8-b6a5-4ab4-848d-908496b6bcc8-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb\" (UID: \"655b2cd8-b6a5-4ab4-848d-908496b6bcc8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb" Nov 25 14:38:31 crc kubenswrapper[4796]: I1125 14:38:31.738176 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/655b2cd8-b6a5-4ab4-848d-908496b6bcc8-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb\" (UID: \"655b2cd8-b6a5-4ab4-848d-908496b6bcc8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb" Nov 25 14:38:31 crc kubenswrapper[4796]: I1125 14:38:31.771634 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbkbq\" (UniqueName: \"kubernetes.io/projected/655b2cd8-b6a5-4ab4-848d-908496b6bcc8-kube-api-access-qbkbq\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb\" (UID: \"655b2cd8-b6a5-4ab4-848d-908496b6bcc8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb" Nov 25 14:38:31 crc kubenswrapper[4796]: I1125 14:38:31.884885 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb" Nov 25 14:38:32 crc kubenswrapper[4796]: I1125 14:38:32.342768 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb"] Nov 25 14:38:32 crc kubenswrapper[4796]: I1125 14:38:32.443590 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb" event={"ID":"655b2cd8-b6a5-4ab4-848d-908496b6bcc8","Type":"ContainerStarted","Data":"ea723b8360c87f73e3e95017d9692f6150ed7813141405025edffc6f1b96dc6d"} Nov 25 14:38:33 crc kubenswrapper[4796]: I1125 14:38:33.431223 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-x57qm" podUID="fa025925-c61e-49ae-ba50-79f4a401a20f" containerName="console" containerID="cri-o://7f29b11936f11ad934b084c682396767f18fe881b08905b803a6480e405ef20b" gracePeriod=15 Nov 25 14:38:33 crc kubenswrapper[4796]: I1125 14:38:33.454645 4796 generic.go:334] "Generic (PLEG): container finished" podID="655b2cd8-b6a5-4ab4-848d-908496b6bcc8" containerID="f1528ac6109b160bd6e1f2a8629ae8be64cef2c08b391862775904931902fe9a" exitCode=0 Nov 25 14:38:33 crc kubenswrapper[4796]: I1125 14:38:33.454785 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb" event={"ID":"655b2cd8-b6a5-4ab4-848d-908496b6bcc8","Type":"ContainerDied","Data":"f1528ac6109b160bd6e1f2a8629ae8be64cef2c08b391862775904931902fe9a"} Nov 25 14:38:33 crc kubenswrapper[4796]: I1125 14:38:33.838427 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-x57qm_fa025925-c61e-49ae-ba50-79f4a401a20f/console/0.log" Nov 25 14:38:33 crc kubenswrapper[4796]: I1125 14:38:33.838505 4796 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-console/console-f9d7485db-x57qm" Nov 25 14:38:33 crc kubenswrapper[4796]: I1125 14:38:33.868506 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa025925-c61e-49ae-ba50-79f4a401a20f-trusted-ca-bundle\") pod \"fa025925-c61e-49ae-ba50-79f4a401a20f\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " Nov 25 14:38:33 crc kubenswrapper[4796]: I1125 14:38:33.868558 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fa025925-c61e-49ae-ba50-79f4a401a20f-console-config\") pod \"fa025925-c61e-49ae-ba50-79f4a401a20f\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " Nov 25 14:38:33 crc kubenswrapper[4796]: I1125 14:38:33.868680 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fa025925-c61e-49ae-ba50-79f4a401a20f-service-ca\") pod \"fa025925-c61e-49ae-ba50-79f4a401a20f\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " Nov 25 14:38:33 crc kubenswrapper[4796]: I1125 14:38:33.868733 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fa025925-c61e-49ae-ba50-79f4a401a20f-oauth-serving-cert\") pod \"fa025925-c61e-49ae-ba50-79f4a401a20f\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " Nov 25 14:38:33 crc kubenswrapper[4796]: I1125 14:38:33.868772 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fa025925-c61e-49ae-ba50-79f4a401a20f-console-oauth-config\") pod \"fa025925-c61e-49ae-ba50-79f4a401a20f\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " Nov 25 14:38:33 crc kubenswrapper[4796]: I1125 14:38:33.868807 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa025925-c61e-49ae-ba50-79f4a401a20f-console-serving-cert\") pod \"fa025925-c61e-49ae-ba50-79f4a401a20f\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " Nov 25 14:38:33 crc kubenswrapper[4796]: I1125 14:38:33.868859 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knsbb\" (UniqueName: \"kubernetes.io/projected/fa025925-c61e-49ae-ba50-79f4a401a20f-kube-api-access-knsbb\") pod \"fa025925-c61e-49ae-ba50-79f4a401a20f\" (UID: \"fa025925-c61e-49ae-ba50-79f4a401a20f\") " Nov 25 14:38:33 crc kubenswrapper[4796]: I1125 14:38:33.869213 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa025925-c61e-49ae-ba50-79f4a401a20f-console-config" (OuterVolumeSpecName: "console-config") pod "fa025925-c61e-49ae-ba50-79f4a401a20f" (UID: "fa025925-c61e-49ae-ba50-79f4a401a20f"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:38:33 crc kubenswrapper[4796]: I1125 14:38:33.869230 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa025925-c61e-49ae-ba50-79f4a401a20f-service-ca" (OuterVolumeSpecName: "service-ca") pod "fa025925-c61e-49ae-ba50-79f4a401a20f" (UID: "fa025925-c61e-49ae-ba50-79f4a401a20f"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:38:33 crc kubenswrapper[4796]: I1125 14:38:33.869249 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa025925-c61e-49ae-ba50-79f4a401a20f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "fa025925-c61e-49ae-ba50-79f4a401a20f" (UID: "fa025925-c61e-49ae-ba50-79f4a401a20f"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:38:33 crc kubenswrapper[4796]: I1125 14:38:33.869655 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa025925-c61e-49ae-ba50-79f4a401a20f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "fa025925-c61e-49ae-ba50-79f4a401a20f" (UID: "fa025925-c61e-49ae-ba50-79f4a401a20f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:38:33 crc kubenswrapper[4796]: I1125 14:38:33.874289 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa025925-c61e-49ae-ba50-79f4a401a20f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "fa025925-c61e-49ae-ba50-79f4a401a20f" (UID: "fa025925-c61e-49ae-ba50-79f4a401a20f"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:38:33 crc kubenswrapper[4796]: I1125 14:38:33.875497 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa025925-c61e-49ae-ba50-79f4a401a20f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "fa025925-c61e-49ae-ba50-79f4a401a20f" (UID: "fa025925-c61e-49ae-ba50-79f4a401a20f"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:38:33 crc kubenswrapper[4796]: I1125 14:38:33.877411 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa025925-c61e-49ae-ba50-79f4a401a20f-kube-api-access-knsbb" (OuterVolumeSpecName: "kube-api-access-knsbb") pod "fa025925-c61e-49ae-ba50-79f4a401a20f" (UID: "fa025925-c61e-49ae-ba50-79f4a401a20f"). InnerVolumeSpecName "kube-api-access-knsbb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:38:33 crc kubenswrapper[4796]: I1125 14:38:33.970330 4796 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fa025925-c61e-49ae-ba50-79f4a401a20f-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:38:33 crc kubenswrapper[4796]: I1125 14:38:33.970632 4796 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fa025925-c61e-49ae-ba50-79f4a401a20f-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:38:33 crc kubenswrapper[4796]: I1125 14:38:33.970647 4796 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fa025925-c61e-49ae-ba50-79f4a401a20f-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:38:33 crc kubenswrapper[4796]: I1125 14:38:33.970657 4796 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa025925-c61e-49ae-ba50-79f4a401a20f-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:38:33 crc kubenswrapper[4796]: I1125 14:38:33.970668 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knsbb\" (UniqueName: \"kubernetes.io/projected/fa025925-c61e-49ae-ba50-79f4a401a20f-kube-api-access-knsbb\") on node \"crc\" DevicePath \"\"" Nov 25 14:38:33 crc kubenswrapper[4796]: I1125 14:38:33.970679 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa025925-c61e-49ae-ba50-79f4a401a20f-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:38:33 crc kubenswrapper[4796]: I1125 14:38:33.970688 4796 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fa025925-c61e-49ae-ba50-79f4a401a20f-console-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:38:34 crc 
kubenswrapper[4796]: I1125 14:38:34.471198 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-x57qm_fa025925-c61e-49ae-ba50-79f4a401a20f/console/0.log" Nov 25 14:38:34 crc kubenswrapper[4796]: I1125 14:38:34.471275 4796 generic.go:334] "Generic (PLEG): container finished" podID="fa025925-c61e-49ae-ba50-79f4a401a20f" containerID="7f29b11936f11ad934b084c682396767f18fe881b08905b803a6480e405ef20b" exitCode=2 Nov 25 14:38:34 crc kubenswrapper[4796]: I1125 14:38:34.471315 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x57qm" event={"ID":"fa025925-c61e-49ae-ba50-79f4a401a20f","Type":"ContainerDied","Data":"7f29b11936f11ad934b084c682396767f18fe881b08905b803a6480e405ef20b"} Nov 25 14:38:34 crc kubenswrapper[4796]: I1125 14:38:34.471355 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x57qm" event={"ID":"fa025925-c61e-49ae-ba50-79f4a401a20f","Type":"ContainerDied","Data":"ffebb67b6d763ec271de5766b70e9385a908588deb94c928286ce46dc7f830ba"} Nov 25 14:38:34 crc kubenswrapper[4796]: I1125 14:38:34.471383 4796 scope.go:117] "RemoveContainer" containerID="7f29b11936f11ad934b084c682396767f18fe881b08905b803a6480e405ef20b" Nov 25 14:38:34 crc kubenswrapper[4796]: I1125 14:38:34.471536 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-x57qm" Nov 25 14:38:34 crc kubenswrapper[4796]: I1125 14:38:34.501555 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-x57qm"] Nov 25 14:38:34 crc kubenswrapper[4796]: I1125 14:38:34.506634 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-x57qm"] Nov 25 14:38:34 crc kubenswrapper[4796]: I1125 14:38:34.555140 4796 scope.go:117] "RemoveContainer" containerID="7f29b11936f11ad934b084c682396767f18fe881b08905b803a6480e405ef20b" Nov 25 14:38:34 crc kubenswrapper[4796]: E1125 14:38:34.555598 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f29b11936f11ad934b084c682396767f18fe881b08905b803a6480e405ef20b\": container with ID starting with 7f29b11936f11ad934b084c682396767f18fe881b08905b803a6480e405ef20b not found: ID does not exist" containerID="7f29b11936f11ad934b084c682396767f18fe881b08905b803a6480e405ef20b" Nov 25 14:38:34 crc kubenswrapper[4796]: I1125 14:38:34.555647 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f29b11936f11ad934b084c682396767f18fe881b08905b803a6480e405ef20b"} err="failed to get container status \"7f29b11936f11ad934b084c682396767f18fe881b08905b803a6480e405ef20b\": rpc error: code = NotFound desc = could not find container \"7f29b11936f11ad934b084c682396767f18fe881b08905b803a6480e405ef20b\": container with ID starting with 7f29b11936f11ad934b084c682396767f18fe881b08905b803a6480e405ef20b not found: ID does not exist" Nov 25 14:38:35 crc kubenswrapper[4796]: I1125 14:38:35.479320 4796 generic.go:334] "Generic (PLEG): container finished" podID="655b2cd8-b6a5-4ab4-848d-908496b6bcc8" containerID="976fd6a97e4dcdd174857a6414565ccd9d34d5fe3c81640a806b6c1e42271baa" exitCode=0 Nov 25 14:38:35 crc kubenswrapper[4796]: I1125 14:38:35.479378 4796 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb" event={"ID":"655b2cd8-b6a5-4ab4-848d-908496b6bcc8","Type":"ContainerDied","Data":"976fd6a97e4dcdd174857a6414565ccd9d34d5fe3c81640a806b6c1e42271baa"} Nov 25 14:38:36 crc kubenswrapper[4796]: I1125 14:38:36.416816 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa025925-c61e-49ae-ba50-79f4a401a20f" path="/var/lib/kubelet/pods/fa025925-c61e-49ae-ba50-79f4a401a20f/volumes" Nov 25 14:38:36 crc kubenswrapper[4796]: I1125 14:38:36.494647 4796 generic.go:334] "Generic (PLEG): container finished" podID="655b2cd8-b6a5-4ab4-848d-908496b6bcc8" containerID="ac11d0aeb1f028d16567f361c942ad395438b00aafd7e9fd8c50b81bc0f3e529" exitCode=0 Nov 25 14:38:36 crc kubenswrapper[4796]: I1125 14:38:36.494767 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb" event={"ID":"655b2cd8-b6a5-4ab4-848d-908496b6bcc8","Type":"ContainerDied","Data":"ac11d0aeb1f028d16567f361c942ad395438b00aafd7e9fd8c50b81bc0f3e529"} Nov 25 14:38:37 crc kubenswrapper[4796]: I1125 14:38:37.735687 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb" Nov 25 14:38:37 crc kubenswrapper[4796]: I1125 14:38:37.928840 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/655b2cd8-b6a5-4ab4-848d-908496b6bcc8-bundle\") pod \"655b2cd8-b6a5-4ab4-848d-908496b6bcc8\" (UID: \"655b2cd8-b6a5-4ab4-848d-908496b6bcc8\") " Nov 25 14:38:37 crc kubenswrapper[4796]: I1125 14:38:37.928921 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/655b2cd8-b6a5-4ab4-848d-908496b6bcc8-util\") pod \"655b2cd8-b6a5-4ab4-848d-908496b6bcc8\" (UID: \"655b2cd8-b6a5-4ab4-848d-908496b6bcc8\") " Nov 25 14:38:37 crc kubenswrapper[4796]: I1125 14:38:37.928974 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbkbq\" (UniqueName: \"kubernetes.io/projected/655b2cd8-b6a5-4ab4-848d-908496b6bcc8-kube-api-access-qbkbq\") pod \"655b2cd8-b6a5-4ab4-848d-908496b6bcc8\" (UID: \"655b2cd8-b6a5-4ab4-848d-908496b6bcc8\") " Nov 25 14:38:37 crc kubenswrapper[4796]: I1125 14:38:37.940210 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/655b2cd8-b6a5-4ab4-848d-908496b6bcc8-kube-api-access-qbkbq" (OuterVolumeSpecName: "kube-api-access-qbkbq") pod "655b2cd8-b6a5-4ab4-848d-908496b6bcc8" (UID: "655b2cd8-b6a5-4ab4-848d-908496b6bcc8"). InnerVolumeSpecName "kube-api-access-qbkbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:38:37 crc kubenswrapper[4796]: I1125 14:38:37.944633 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/655b2cd8-b6a5-4ab4-848d-908496b6bcc8-bundle" (OuterVolumeSpecName: "bundle") pod "655b2cd8-b6a5-4ab4-848d-908496b6bcc8" (UID: "655b2cd8-b6a5-4ab4-848d-908496b6bcc8"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:38:38 crc kubenswrapper[4796]: I1125 14:38:38.030692 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbkbq\" (UniqueName: \"kubernetes.io/projected/655b2cd8-b6a5-4ab4-848d-908496b6bcc8-kube-api-access-qbkbq\") on node \"crc\" DevicePath \"\"" Nov 25 14:38:38 crc kubenswrapper[4796]: I1125 14:38:38.030763 4796 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/655b2cd8-b6a5-4ab4-848d-908496b6bcc8-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:38:38 crc kubenswrapper[4796]: I1125 14:38:38.204553 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/655b2cd8-b6a5-4ab4-848d-908496b6bcc8-util" (OuterVolumeSpecName: "util") pod "655b2cd8-b6a5-4ab4-848d-908496b6bcc8" (UID: "655b2cd8-b6a5-4ab4-848d-908496b6bcc8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:38:38 crc kubenswrapper[4796]: I1125 14:38:38.236025 4796 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/655b2cd8-b6a5-4ab4-848d-908496b6bcc8-util\") on node \"crc\" DevicePath \"\"" Nov 25 14:38:38 crc kubenswrapper[4796]: I1125 14:38:38.513344 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb" event={"ID":"655b2cd8-b6a5-4ab4-848d-908496b6bcc8","Type":"ContainerDied","Data":"ea723b8360c87f73e3e95017d9692f6150ed7813141405025edffc6f1b96dc6d"} Nov 25 14:38:38 crc kubenswrapper[4796]: I1125 14:38:38.513871 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea723b8360c87f73e3e95017d9692f6150ed7813141405025edffc6f1b96dc6d" Nov 25 14:38:38 crc kubenswrapper[4796]: I1125 14:38:38.513476 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.430208 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-68786bb9d9-qc95x"] Nov 25 14:38:47 crc kubenswrapper[4796]: E1125 14:38:47.432075 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="655b2cd8-b6a5-4ab4-848d-908496b6bcc8" containerName="util" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.432170 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="655b2cd8-b6a5-4ab4-848d-908496b6bcc8" containerName="util" Nov 25 14:38:47 crc kubenswrapper[4796]: E1125 14:38:47.432253 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="655b2cd8-b6a5-4ab4-848d-908496b6bcc8" containerName="extract" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.432399 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="655b2cd8-b6a5-4ab4-848d-908496b6bcc8" containerName="extract" Nov 25 14:38:47 crc kubenswrapper[4796]: E1125 14:38:47.432507 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa025925-c61e-49ae-ba50-79f4a401a20f" containerName="console" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.432573 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa025925-c61e-49ae-ba50-79f4a401a20f" containerName="console" Nov 25 14:38:47 crc kubenswrapper[4796]: E1125 14:38:47.432671 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="655b2cd8-b6a5-4ab4-848d-908496b6bcc8" containerName="pull" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.432740 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="655b2cd8-b6a5-4ab4-848d-908496b6bcc8" containerName="pull" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.432945 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa025925-c61e-49ae-ba50-79f4a401a20f" 
containerName="console" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.433034 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="655b2cd8-b6a5-4ab4-848d-908496b6bcc8" containerName="extract" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.433629 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-68786bb9d9-qc95x" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.436265 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.436664 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.437436 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.437481 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.437648 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-d8qxs" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.452740 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-68786bb9d9-qc95x"] Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.550651 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g54k4\" (UniqueName: \"kubernetes.io/projected/5f701779-96c6-4764-b207-88847114d7c8-kube-api-access-g54k4\") pod \"metallb-operator-controller-manager-68786bb9d9-qc95x\" (UID: \"5f701779-96c6-4764-b207-88847114d7c8\") " 
pod="metallb-system/metallb-operator-controller-manager-68786bb9d9-qc95x" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.550796 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5f701779-96c6-4764-b207-88847114d7c8-webhook-cert\") pod \"metallb-operator-controller-manager-68786bb9d9-qc95x\" (UID: \"5f701779-96c6-4764-b207-88847114d7c8\") " pod="metallb-system/metallb-operator-controller-manager-68786bb9d9-qc95x" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.550864 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f701779-96c6-4764-b207-88847114d7c8-apiservice-cert\") pod \"metallb-operator-controller-manager-68786bb9d9-qc95x\" (UID: \"5f701779-96c6-4764-b207-88847114d7c8\") " pod="metallb-system/metallb-operator-controller-manager-68786bb9d9-qc95x" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.652468 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5f701779-96c6-4764-b207-88847114d7c8-webhook-cert\") pod \"metallb-operator-controller-manager-68786bb9d9-qc95x\" (UID: \"5f701779-96c6-4764-b207-88847114d7c8\") " pod="metallb-system/metallb-operator-controller-manager-68786bb9d9-qc95x" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.652519 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f701779-96c6-4764-b207-88847114d7c8-apiservice-cert\") pod \"metallb-operator-controller-manager-68786bb9d9-qc95x\" (UID: \"5f701779-96c6-4764-b207-88847114d7c8\") " pod="metallb-system/metallb-operator-controller-manager-68786bb9d9-qc95x" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.652557 4796 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-g54k4\" (UniqueName: \"kubernetes.io/projected/5f701779-96c6-4764-b207-88847114d7c8-kube-api-access-g54k4\") pod \"metallb-operator-controller-manager-68786bb9d9-qc95x\" (UID: \"5f701779-96c6-4764-b207-88847114d7c8\") " pod="metallb-system/metallb-operator-controller-manager-68786bb9d9-qc95x" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.658886 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f701779-96c6-4764-b207-88847114d7c8-apiservice-cert\") pod \"metallb-operator-controller-manager-68786bb9d9-qc95x\" (UID: \"5f701779-96c6-4764-b207-88847114d7c8\") " pod="metallb-system/metallb-operator-controller-manager-68786bb9d9-qc95x" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.672326 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g54k4\" (UniqueName: \"kubernetes.io/projected/5f701779-96c6-4764-b207-88847114d7c8-kube-api-access-g54k4\") pod \"metallb-operator-controller-manager-68786bb9d9-qc95x\" (UID: \"5f701779-96c6-4764-b207-88847114d7c8\") " pod="metallb-system/metallb-operator-controller-manager-68786bb9d9-qc95x" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.675314 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5f701779-96c6-4764-b207-88847114d7c8-webhook-cert\") pod \"metallb-operator-controller-manager-68786bb9d9-qc95x\" (UID: \"5f701779-96c6-4764-b207-88847114d7c8\") " pod="metallb-system/metallb-operator-controller-manager-68786bb9d9-qc95x" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.750624 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-68786bb9d9-qc95x" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.959122 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-778544677-4pg8n"] Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.959833 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-778544677-4pg8n" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.962479 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.962526 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-xvk6d" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.962486 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 25 14:38:47 crc kubenswrapper[4796]: I1125 14:38:47.981360 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-778544677-4pg8n"] Nov 25 14:38:48 crc kubenswrapper[4796]: I1125 14:38:48.154278 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-68786bb9d9-qc95x"] Nov 25 14:38:48 crc kubenswrapper[4796]: I1125 14:38:48.159092 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsfck\" (UniqueName: \"kubernetes.io/projected/5a58cf97-35a8-4201-91b5-c03fce0361b8-kube-api-access-zsfck\") pod \"metallb-operator-webhook-server-778544677-4pg8n\" (UID: \"5a58cf97-35a8-4201-91b5-c03fce0361b8\") " pod="metallb-system/metallb-operator-webhook-server-778544677-4pg8n" Nov 25 14:38:48 crc kubenswrapper[4796]: I1125 14:38:48.159133 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a58cf97-35a8-4201-91b5-c03fce0361b8-webhook-cert\") pod \"metallb-operator-webhook-server-778544677-4pg8n\" (UID: \"5a58cf97-35a8-4201-91b5-c03fce0361b8\") " pod="metallb-system/metallb-operator-webhook-server-778544677-4pg8n" Nov 25 14:38:48 crc kubenswrapper[4796]: I1125 14:38:48.159160 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a58cf97-35a8-4201-91b5-c03fce0361b8-apiservice-cert\") pod \"metallb-operator-webhook-server-778544677-4pg8n\" (UID: \"5a58cf97-35a8-4201-91b5-c03fce0361b8\") " pod="metallb-system/metallb-operator-webhook-server-778544677-4pg8n" Nov 25 14:38:48 crc kubenswrapper[4796]: W1125 14:38:48.161107 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f701779_96c6_4764_b207_88847114d7c8.slice/crio-ee95e45468d19492b6bd5188d06d9bc5e4cc01edcb778f72a938b92d3d95611f WatchSource:0}: Error finding container ee95e45468d19492b6bd5188d06d9bc5e4cc01edcb778f72a938b92d3d95611f: Status 404 returned error can't find the container with id ee95e45468d19492b6bd5188d06d9bc5e4cc01edcb778f72a938b92d3d95611f Nov 25 14:38:48 crc kubenswrapper[4796]: I1125 14:38:48.260706 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsfck\" (UniqueName: \"kubernetes.io/projected/5a58cf97-35a8-4201-91b5-c03fce0361b8-kube-api-access-zsfck\") pod \"metallb-operator-webhook-server-778544677-4pg8n\" (UID: \"5a58cf97-35a8-4201-91b5-c03fce0361b8\") " pod="metallb-system/metallb-operator-webhook-server-778544677-4pg8n" Nov 25 14:38:48 crc kubenswrapper[4796]: I1125 14:38:48.260763 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/5a58cf97-35a8-4201-91b5-c03fce0361b8-webhook-cert\") pod \"metallb-operator-webhook-server-778544677-4pg8n\" (UID: \"5a58cf97-35a8-4201-91b5-c03fce0361b8\") " pod="metallb-system/metallb-operator-webhook-server-778544677-4pg8n" Nov 25 14:38:48 crc kubenswrapper[4796]: I1125 14:38:48.260794 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a58cf97-35a8-4201-91b5-c03fce0361b8-apiservice-cert\") pod \"metallb-operator-webhook-server-778544677-4pg8n\" (UID: \"5a58cf97-35a8-4201-91b5-c03fce0361b8\") " pod="metallb-system/metallb-operator-webhook-server-778544677-4pg8n" Nov 25 14:38:48 crc kubenswrapper[4796]: I1125 14:38:48.265346 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a58cf97-35a8-4201-91b5-c03fce0361b8-webhook-cert\") pod \"metallb-operator-webhook-server-778544677-4pg8n\" (UID: \"5a58cf97-35a8-4201-91b5-c03fce0361b8\") " pod="metallb-system/metallb-operator-webhook-server-778544677-4pg8n" Nov 25 14:38:48 crc kubenswrapper[4796]: I1125 14:38:48.266466 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a58cf97-35a8-4201-91b5-c03fce0361b8-apiservice-cert\") pod \"metallb-operator-webhook-server-778544677-4pg8n\" (UID: \"5a58cf97-35a8-4201-91b5-c03fce0361b8\") " pod="metallb-system/metallb-operator-webhook-server-778544677-4pg8n" Nov 25 14:38:48 crc kubenswrapper[4796]: I1125 14:38:48.276917 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsfck\" (UniqueName: \"kubernetes.io/projected/5a58cf97-35a8-4201-91b5-c03fce0361b8-kube-api-access-zsfck\") pod \"metallb-operator-webhook-server-778544677-4pg8n\" (UID: \"5a58cf97-35a8-4201-91b5-c03fce0361b8\") " pod="metallb-system/metallb-operator-webhook-server-778544677-4pg8n" Nov 25 14:38:48 crc 
kubenswrapper[4796]: I1125 14:38:48.288881 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-778544677-4pg8n" Nov 25 14:38:48 crc kubenswrapper[4796]: I1125 14:38:48.570231 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-68786bb9d9-qc95x" event={"ID":"5f701779-96c6-4764-b207-88847114d7c8","Type":"ContainerStarted","Data":"ee95e45468d19492b6bd5188d06d9bc5e4cc01edcb778f72a938b92d3d95611f"} Nov 25 14:38:48 crc kubenswrapper[4796]: I1125 14:38:48.726288 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-778544677-4pg8n"] Nov 25 14:38:48 crc kubenswrapper[4796]: W1125 14:38:48.733717 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a58cf97_35a8_4201_91b5_c03fce0361b8.slice/crio-9abdcbbccf064e84f54986c940d228e9bc0eacf18653dc7f1cea963d851f27a6 WatchSource:0}: Error finding container 9abdcbbccf064e84f54986c940d228e9bc0eacf18653dc7f1cea963d851f27a6: Status 404 returned error can't find the container with id 9abdcbbccf064e84f54986c940d228e9bc0eacf18653dc7f1cea963d851f27a6 Nov 25 14:38:49 crc kubenswrapper[4796]: I1125 14:38:49.594824 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-778544677-4pg8n" event={"ID":"5a58cf97-35a8-4201-91b5-c03fce0361b8","Type":"ContainerStarted","Data":"9abdcbbccf064e84f54986c940d228e9bc0eacf18653dc7f1cea963d851f27a6"} Nov 25 14:38:51 crc kubenswrapper[4796]: I1125 14:38:51.614307 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-68786bb9d9-qc95x" event={"ID":"5f701779-96c6-4764-b207-88847114d7c8","Type":"ContainerStarted","Data":"eb8fe12eb808f3a9d2c28a7e2b733fa5e94a268fe9373344b284678006faf361"} Nov 25 14:38:51 crc kubenswrapper[4796]: I1125 
14:38:51.614856 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-68786bb9d9-qc95x" Nov 25 14:38:51 crc kubenswrapper[4796]: I1125 14:38:51.648947 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-68786bb9d9-qc95x" podStartSLOduration=1.948861934 podStartE2EDuration="4.648858811s" podCreationTimestamp="2025-11-25 14:38:47 +0000 UTC" firstStartedPulling="2025-11-25 14:38:48.163666936 +0000 UTC m=+856.506776350" lastFinishedPulling="2025-11-25 14:38:50.863663813 +0000 UTC m=+859.206773227" observedRunningTime="2025-11-25 14:38:51.645500737 +0000 UTC m=+859.988610171" watchObservedRunningTime="2025-11-25 14:38:51.648858811 +0000 UTC m=+859.991968235" Nov 25 14:38:53 crc kubenswrapper[4796]: I1125 14:38:53.625623 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-778544677-4pg8n" event={"ID":"5a58cf97-35a8-4201-91b5-c03fce0361b8","Type":"ContainerStarted","Data":"05c613ef58fb1873810ef24f5228dac7f20ab54b6e17e7902ee49f6277807f57"} Nov 25 14:38:53 crc kubenswrapper[4796]: I1125 14:38:53.626023 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-778544677-4pg8n" Nov 25 14:38:53 crc kubenswrapper[4796]: I1125 14:38:53.650504 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-778544677-4pg8n" podStartSLOduration=2.724550119 podStartE2EDuration="6.65047018s" podCreationTimestamp="2025-11-25 14:38:47 +0000 UTC" firstStartedPulling="2025-11-25 14:38:48.736149393 +0000 UTC m=+857.079258807" lastFinishedPulling="2025-11-25 14:38:52.662069444 +0000 UTC m=+861.005178868" observedRunningTime="2025-11-25 14:38:53.64432924 +0000 UTC m=+861.987438744" watchObservedRunningTime="2025-11-25 14:38:53.65047018 +0000 UTC m=+861.993579664" Nov 25 
14:39:08 crc kubenswrapper[4796]: I1125 14:39:08.297308 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-778544677-4pg8n" Nov 25 14:39:27 crc kubenswrapper[4796]: I1125 14:39:27.754812 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-68786bb9d9-qc95x" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.494119 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-zk9xk"] Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.495103 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-zk9xk" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.498888 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-vhclt"] Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.499068 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.499750 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-ms67p" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.510835 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-zk9xk"] Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.510974 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-vhclt" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.513176 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.515843 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.575644 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-kq8m7"] Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.577099 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-kq8m7" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.578675 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.578719 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.580266 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.580544 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-4v8z6" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.589010 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-zr8xl"] Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.590107 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-zr8xl" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.596787 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.599665 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/79869a5f-b9a3-46e0-bac7-9ff9ac72b16c-cert\") pod \"frr-k8s-webhook-server-6998585d5-zk9xk\" (UID: \"79869a5f-b9a3-46e0-bac7-9ff9ac72b16c\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-zk9xk" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.599908 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s79fd\" (UniqueName: \"kubernetes.io/projected/79869a5f-b9a3-46e0-bac7-9ff9ac72b16c-kube-api-access-s79fd\") pod \"frr-k8s-webhook-server-6998585d5-zk9xk\" (UID: \"79869a5f-b9a3-46e0-bac7-9ff9ac72b16c\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-zk9xk" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.601320 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-zr8xl"] Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.701544 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwhwt\" (UniqueName: \"kubernetes.io/projected/4fc70054-d9cd-4545-b9e7-d6665887e94d-kube-api-access-hwhwt\") pod \"frr-k8s-vhclt\" (UID: \"4fc70054-d9cd-4545-b9e7-d6665887e94d\") " pod="metallb-system/frr-k8s-vhclt" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.701634 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f037a6b-9e7f-401d-b4db-98132fb0f9b2-metrics-certs\") pod \"speaker-kq8m7\" (UID: 
\"7f037a6b-9e7f-401d-b4db-98132fb0f9b2\") " pod="metallb-system/speaker-kq8m7" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.701684 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s79fd\" (UniqueName: \"kubernetes.io/projected/79869a5f-b9a3-46e0-bac7-9ff9ac72b16c-kube-api-access-s79fd\") pod \"frr-k8s-webhook-server-6998585d5-zk9xk\" (UID: \"79869a5f-b9a3-46e0-bac7-9ff9ac72b16c\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-zk9xk" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.701733 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w888g\" (UniqueName: \"kubernetes.io/projected/7f037a6b-9e7f-401d-b4db-98132fb0f9b2-kube-api-access-w888g\") pod \"speaker-kq8m7\" (UID: \"7f037a6b-9e7f-401d-b4db-98132fb0f9b2\") " pod="metallb-system/speaker-kq8m7" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.701773 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7f037a6b-9e7f-401d-b4db-98132fb0f9b2-memberlist\") pod \"speaker-kq8m7\" (UID: \"7f037a6b-9e7f-401d-b4db-98132fb0f9b2\") " pod="metallb-system/speaker-kq8m7" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.701807 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgd5t\" (UniqueName: \"kubernetes.io/projected/1979dccd-b017-42f5-9fe1-8717af3f948a-kube-api-access-qgd5t\") pod \"controller-6c7b4b5f48-zr8xl\" (UID: \"1979dccd-b017-42f5-9fe1-8717af3f948a\") " pod="metallb-system/controller-6c7b4b5f48-zr8xl" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.701844 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1979dccd-b017-42f5-9fe1-8717af3f948a-metrics-certs\") pod 
\"controller-6c7b4b5f48-zr8xl\" (UID: \"1979dccd-b017-42f5-9fe1-8717af3f948a\") " pod="metallb-system/controller-6c7b4b5f48-zr8xl" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.702035 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fc70054-d9cd-4545-b9e7-d6665887e94d-metrics-certs\") pod \"frr-k8s-vhclt\" (UID: \"4fc70054-d9cd-4545-b9e7-d6665887e94d\") " pod="metallb-system/frr-k8s-vhclt" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.702096 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/79869a5f-b9a3-46e0-bac7-9ff9ac72b16c-cert\") pod \"frr-k8s-webhook-server-6998585d5-zk9xk\" (UID: \"79869a5f-b9a3-46e0-bac7-9ff9ac72b16c\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-zk9xk" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.702136 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4fc70054-d9cd-4545-b9e7-d6665887e94d-frr-conf\") pod \"frr-k8s-vhclt\" (UID: \"4fc70054-d9cd-4545-b9e7-d6665887e94d\") " pod="metallb-system/frr-k8s-vhclt" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.702187 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4fc70054-d9cd-4545-b9e7-d6665887e94d-reloader\") pod \"frr-k8s-vhclt\" (UID: \"4fc70054-d9cd-4545-b9e7-d6665887e94d\") " pod="metallb-system/frr-k8s-vhclt" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.702237 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7f037a6b-9e7f-401d-b4db-98132fb0f9b2-metallb-excludel2\") pod \"speaker-kq8m7\" (UID: \"7f037a6b-9e7f-401d-b4db-98132fb0f9b2\") " 
pod="metallb-system/speaker-kq8m7" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.702272 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4fc70054-d9cd-4545-b9e7-d6665887e94d-frr-startup\") pod \"frr-k8s-vhclt\" (UID: \"4fc70054-d9cd-4545-b9e7-d6665887e94d\") " pod="metallb-system/frr-k8s-vhclt" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.702350 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1979dccd-b017-42f5-9fe1-8717af3f948a-cert\") pod \"controller-6c7b4b5f48-zr8xl\" (UID: \"1979dccd-b017-42f5-9fe1-8717af3f948a\") " pod="metallb-system/controller-6c7b4b5f48-zr8xl" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.702425 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4fc70054-d9cd-4545-b9e7-d6665887e94d-frr-sockets\") pod \"frr-k8s-vhclt\" (UID: \"4fc70054-d9cd-4545-b9e7-d6665887e94d\") " pod="metallb-system/frr-k8s-vhclt" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.702510 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4fc70054-d9cd-4545-b9e7-d6665887e94d-metrics\") pod \"frr-k8s-vhclt\" (UID: \"4fc70054-d9cd-4545-b9e7-d6665887e94d\") " pod="metallb-system/frr-k8s-vhclt" Nov 25 14:39:28 crc kubenswrapper[4796]: E1125 14:39:28.702620 4796 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Nov 25 14:39:28 crc kubenswrapper[4796]: E1125 14:39:28.702777 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79869a5f-b9a3-46e0-bac7-9ff9ac72b16c-cert podName:79869a5f-b9a3-46e0-bac7-9ff9ac72b16c nodeName:}" failed. 
No retries permitted until 2025-11-25 14:39:29.202755713 +0000 UTC m=+897.545865127 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/79869a5f-b9a3-46e0-bac7-9ff9ac72b16c-cert") pod "frr-k8s-webhook-server-6998585d5-zk9xk" (UID: "79869a5f-b9a3-46e0-bac7-9ff9ac72b16c") : secret "frr-k8s-webhook-server-cert" not found Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.722445 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s79fd\" (UniqueName: \"kubernetes.io/projected/79869a5f-b9a3-46e0-bac7-9ff9ac72b16c-kube-api-access-s79fd\") pod \"frr-k8s-webhook-server-6998585d5-zk9xk\" (UID: \"79869a5f-b9a3-46e0-bac7-9ff9ac72b16c\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-zk9xk" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.802968 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fc70054-d9cd-4545-b9e7-d6665887e94d-metrics-certs\") pod \"frr-k8s-vhclt\" (UID: \"4fc70054-d9cd-4545-b9e7-d6665887e94d\") " pod="metallb-system/frr-k8s-vhclt" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.803346 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4fc70054-d9cd-4545-b9e7-d6665887e94d-frr-conf\") pod \"frr-k8s-vhclt\" (UID: \"4fc70054-d9cd-4545-b9e7-d6665887e94d\") " pod="metallb-system/frr-k8s-vhclt" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.803378 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4fc70054-d9cd-4545-b9e7-d6665887e94d-reloader\") pod \"frr-k8s-vhclt\" (UID: \"4fc70054-d9cd-4545-b9e7-d6665887e94d\") " pod="metallb-system/frr-k8s-vhclt" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.803407 4796 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7f037a6b-9e7f-401d-b4db-98132fb0f9b2-metallb-excludel2\") pod \"speaker-kq8m7\" (UID: \"7f037a6b-9e7f-401d-b4db-98132fb0f9b2\") " pod="metallb-system/speaker-kq8m7" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.803426 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4fc70054-d9cd-4545-b9e7-d6665887e94d-frr-startup\") pod \"frr-k8s-vhclt\" (UID: \"4fc70054-d9cd-4545-b9e7-d6665887e94d\") " pod="metallb-system/frr-k8s-vhclt" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.803458 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1979dccd-b017-42f5-9fe1-8717af3f948a-cert\") pod \"controller-6c7b4b5f48-zr8xl\" (UID: \"1979dccd-b017-42f5-9fe1-8717af3f948a\") " pod="metallb-system/controller-6c7b4b5f48-zr8xl" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.803487 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4fc70054-d9cd-4545-b9e7-d6665887e94d-frr-sockets\") pod \"frr-k8s-vhclt\" (UID: \"4fc70054-d9cd-4545-b9e7-d6665887e94d\") " pod="metallb-system/frr-k8s-vhclt" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.803515 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4fc70054-d9cd-4545-b9e7-d6665887e94d-metrics\") pod \"frr-k8s-vhclt\" (UID: \"4fc70054-d9cd-4545-b9e7-d6665887e94d\") " pod="metallb-system/frr-k8s-vhclt" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.803541 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwhwt\" (UniqueName: \"kubernetes.io/projected/4fc70054-d9cd-4545-b9e7-d6665887e94d-kube-api-access-hwhwt\") pod \"frr-k8s-vhclt\" (UID: 
\"4fc70054-d9cd-4545-b9e7-d6665887e94d\") " pod="metallb-system/frr-k8s-vhclt" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.803560 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f037a6b-9e7f-401d-b4db-98132fb0f9b2-metrics-certs\") pod \"speaker-kq8m7\" (UID: \"7f037a6b-9e7f-401d-b4db-98132fb0f9b2\") " pod="metallb-system/speaker-kq8m7" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.803605 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w888g\" (UniqueName: \"kubernetes.io/projected/7f037a6b-9e7f-401d-b4db-98132fb0f9b2-kube-api-access-w888g\") pod \"speaker-kq8m7\" (UID: \"7f037a6b-9e7f-401d-b4db-98132fb0f9b2\") " pod="metallb-system/speaker-kq8m7" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.803631 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7f037a6b-9e7f-401d-b4db-98132fb0f9b2-memberlist\") pod \"speaker-kq8m7\" (UID: \"7f037a6b-9e7f-401d-b4db-98132fb0f9b2\") " pod="metallb-system/speaker-kq8m7" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.803656 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgd5t\" (UniqueName: \"kubernetes.io/projected/1979dccd-b017-42f5-9fe1-8717af3f948a-kube-api-access-qgd5t\") pod \"controller-6c7b4b5f48-zr8xl\" (UID: \"1979dccd-b017-42f5-9fe1-8717af3f948a\") " pod="metallb-system/controller-6c7b4b5f48-zr8xl" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.803681 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1979dccd-b017-42f5-9fe1-8717af3f948a-metrics-certs\") pod \"controller-6c7b4b5f48-zr8xl\" (UID: \"1979dccd-b017-42f5-9fe1-8717af3f948a\") " pod="metallb-system/controller-6c7b4b5f48-zr8xl" Nov 25 14:39:28 crc 
kubenswrapper[4796]: E1125 14:39:28.804493 4796 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Nov 25 14:39:28 crc kubenswrapper[4796]: E1125 14:39:28.804537 4796 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 25 14:39:28 crc kubenswrapper[4796]: E1125 14:39:28.804584 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f037a6b-9e7f-401d-b4db-98132fb0f9b2-metrics-certs podName:7f037a6b-9e7f-401d-b4db-98132fb0f9b2 nodeName:}" failed. No retries permitted until 2025-11-25 14:39:29.304536927 +0000 UTC m=+897.647646351 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f037a6b-9e7f-401d-b4db-98132fb0f9b2-metrics-certs") pod "speaker-kq8m7" (UID: "7f037a6b-9e7f-401d-b4db-98132fb0f9b2") : secret "speaker-certs-secret" not found Nov 25 14:39:28 crc kubenswrapper[4796]: E1125 14:39:28.804617 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f037a6b-9e7f-401d-b4db-98132fb0f9b2-memberlist podName:7f037a6b-9e7f-401d-b4db-98132fb0f9b2 nodeName:}" failed. No retries permitted until 2025-11-25 14:39:29.304596719 +0000 UTC m=+897.647706223 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7f037a6b-9e7f-401d-b4db-98132fb0f9b2-memberlist") pod "speaker-kq8m7" (UID: "7f037a6b-9e7f-401d-b4db-98132fb0f9b2") : secret "metallb-memberlist" not found Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.804753 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7f037a6b-9e7f-401d-b4db-98132fb0f9b2-metallb-excludel2\") pod \"speaker-kq8m7\" (UID: \"7f037a6b-9e7f-401d-b4db-98132fb0f9b2\") " pod="metallb-system/speaker-kq8m7" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.805256 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4fc70054-d9cd-4545-b9e7-d6665887e94d-metrics\") pod \"frr-k8s-vhclt\" (UID: \"4fc70054-d9cd-4545-b9e7-d6665887e94d\") " pod="metallb-system/frr-k8s-vhclt" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.805317 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4fc70054-d9cd-4545-b9e7-d6665887e94d-frr-startup\") pod \"frr-k8s-vhclt\" (UID: \"4fc70054-d9cd-4545-b9e7-d6665887e94d\") " pod="metallb-system/frr-k8s-vhclt" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.806523 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4fc70054-d9cd-4545-b9e7-d6665887e94d-frr-conf\") pod \"frr-k8s-vhclt\" (UID: \"4fc70054-d9cd-4545-b9e7-d6665887e94d\") " pod="metallb-system/frr-k8s-vhclt" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.806559 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4fc70054-d9cd-4545-b9e7-d6665887e94d-frr-sockets\") pod \"frr-k8s-vhclt\" (UID: \"4fc70054-d9cd-4545-b9e7-d6665887e94d\") " pod="metallb-system/frr-k8s-vhclt" Nov 
25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.806680 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4fc70054-d9cd-4545-b9e7-d6665887e94d-reloader\") pod \"frr-k8s-vhclt\" (UID: \"4fc70054-d9cd-4545-b9e7-d6665887e94d\") " pod="metallb-system/frr-k8s-vhclt" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.806843 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fc70054-d9cd-4545-b9e7-d6665887e94d-metrics-certs\") pod \"frr-k8s-vhclt\" (UID: \"4fc70054-d9cd-4545-b9e7-d6665887e94d\") " pod="metallb-system/frr-k8s-vhclt" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.807332 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1979dccd-b017-42f5-9fe1-8717af3f948a-metrics-certs\") pod \"controller-6c7b4b5f48-zr8xl\" (UID: \"1979dccd-b017-42f5-9fe1-8717af3f948a\") " pod="metallb-system/controller-6c7b4b5f48-zr8xl" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.808446 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1979dccd-b017-42f5-9fe1-8717af3f948a-cert\") pod \"controller-6c7b4b5f48-zr8xl\" (UID: \"1979dccd-b017-42f5-9fe1-8717af3f948a\") " pod="metallb-system/controller-6c7b4b5f48-zr8xl" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.820960 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwhwt\" (UniqueName: \"kubernetes.io/projected/4fc70054-d9cd-4545-b9e7-d6665887e94d-kube-api-access-hwhwt\") pod \"frr-k8s-vhclt\" (UID: \"4fc70054-d9cd-4545-b9e7-d6665887e94d\") " pod="metallb-system/frr-k8s-vhclt" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.832167 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w888g\" (UniqueName: 
\"kubernetes.io/projected/7f037a6b-9e7f-401d-b4db-98132fb0f9b2-kube-api-access-w888g\") pod \"speaker-kq8m7\" (UID: \"7f037a6b-9e7f-401d-b4db-98132fb0f9b2\") " pod="metallb-system/speaker-kq8m7" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.833546 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-vhclt" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.848716 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgd5t\" (UniqueName: \"kubernetes.io/projected/1979dccd-b017-42f5-9fe1-8717af3f948a-kube-api-access-qgd5t\") pod \"controller-6c7b4b5f48-zr8xl\" (UID: \"1979dccd-b017-42f5-9fe1-8717af3f948a\") " pod="metallb-system/controller-6c7b4b5f48-zr8xl" Nov 25 14:39:28 crc kubenswrapper[4796]: I1125 14:39:28.906419 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-zr8xl" Nov 25 14:39:29 crc kubenswrapper[4796]: I1125 14:39:29.207244 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/79869a5f-b9a3-46e0-bac7-9ff9ac72b16c-cert\") pod \"frr-k8s-webhook-server-6998585d5-zk9xk\" (UID: \"79869a5f-b9a3-46e0-bac7-9ff9ac72b16c\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-zk9xk" Nov 25 14:39:29 crc kubenswrapper[4796]: I1125 14:39:29.212410 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/79869a5f-b9a3-46e0-bac7-9ff9ac72b16c-cert\") pod \"frr-k8s-webhook-server-6998585d5-zk9xk\" (UID: \"79869a5f-b9a3-46e0-bac7-9ff9ac72b16c\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-zk9xk" Nov 25 14:39:29 crc kubenswrapper[4796]: I1125 14:39:29.298243 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-zr8xl"] Nov 25 14:39:29 crc kubenswrapper[4796]: I1125 14:39:29.308450 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f037a6b-9e7f-401d-b4db-98132fb0f9b2-metrics-certs\") pod \"speaker-kq8m7\" (UID: \"7f037a6b-9e7f-401d-b4db-98132fb0f9b2\") " pod="metallb-system/speaker-kq8m7" Nov 25 14:39:29 crc kubenswrapper[4796]: I1125 14:39:29.308509 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7f037a6b-9e7f-401d-b4db-98132fb0f9b2-memberlist\") pod \"speaker-kq8m7\" (UID: \"7f037a6b-9e7f-401d-b4db-98132fb0f9b2\") " pod="metallb-system/speaker-kq8m7" Nov 25 14:39:29 crc kubenswrapper[4796]: E1125 14:39:29.308668 4796 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 25 14:39:29 crc kubenswrapper[4796]: E1125 14:39:29.308731 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f037a6b-9e7f-401d-b4db-98132fb0f9b2-memberlist podName:7f037a6b-9e7f-401d-b4db-98132fb0f9b2 nodeName:}" failed. No retries permitted until 2025-11-25 14:39:30.308712263 +0000 UTC m=+898.651821697 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7f037a6b-9e7f-401d-b4db-98132fb0f9b2-memberlist") pod "speaker-kq8m7" (UID: "7f037a6b-9e7f-401d-b4db-98132fb0f9b2") : secret "metallb-memberlist" not found Nov 25 14:39:29 crc kubenswrapper[4796]: I1125 14:39:29.313222 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f037a6b-9e7f-401d-b4db-98132fb0f9b2-metrics-certs\") pod \"speaker-kq8m7\" (UID: \"7f037a6b-9e7f-401d-b4db-98132fb0f9b2\") " pod="metallb-system/speaker-kq8m7" Nov 25 14:39:29 crc kubenswrapper[4796]: W1125 14:39:29.327711 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1979dccd_b017_42f5_9fe1_8717af3f948a.slice/crio-92cd5d1bde98187292ccc285e423bad087899cc8f63fa625038cccc241c50641 WatchSource:0}: Error finding container 92cd5d1bde98187292ccc285e423bad087899cc8f63fa625038cccc241c50641: Status 404 returned error can't find the container with id 92cd5d1bde98187292ccc285e423bad087899cc8f63fa625038cccc241c50641 Nov 25 14:39:29 crc kubenswrapper[4796]: I1125 14:39:29.420636 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-zk9xk" Nov 25 14:39:29 crc kubenswrapper[4796]: I1125 14:39:29.666122 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-zk9xk"] Nov 25 14:39:29 crc kubenswrapper[4796]: W1125 14:39:29.666844 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79869a5f_b9a3_46e0_bac7_9ff9ac72b16c.slice/crio-1b34c438cd77c7449afd1199116fc9f609a2b07069a50fef9a5ad7651a09ed15 WatchSource:0}: Error finding container 1b34c438cd77c7449afd1199116fc9f609a2b07069a50fef9a5ad7651a09ed15: Status 404 returned error can't find the container with id 1b34c438cd77c7449afd1199116fc9f609a2b07069a50fef9a5ad7651a09ed15 Nov 25 14:39:29 crc kubenswrapper[4796]: I1125 14:39:29.872412 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-zk9xk" event={"ID":"79869a5f-b9a3-46e0-bac7-9ff9ac72b16c","Type":"ContainerStarted","Data":"1b34c438cd77c7449afd1199116fc9f609a2b07069a50fef9a5ad7651a09ed15"} Nov 25 14:39:29 crc kubenswrapper[4796]: I1125 14:39:29.878142 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-zr8xl" event={"ID":"1979dccd-b017-42f5-9fe1-8717af3f948a","Type":"ContainerStarted","Data":"ea000b2aeb999ad504772f75ca5feade89b6e13ec86f6ebe6897476c21835aca"} Nov 25 14:39:29 crc kubenswrapper[4796]: I1125 14:39:29.878202 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-zr8xl" event={"ID":"1979dccd-b017-42f5-9fe1-8717af3f948a","Type":"ContainerStarted","Data":"92cd5d1bde98187292ccc285e423bad087899cc8f63fa625038cccc241c50641"} Nov 25 14:39:29 crc kubenswrapper[4796]: I1125 14:39:29.879402 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vhclt" 
event={"ID":"4fc70054-d9cd-4545-b9e7-d6665887e94d","Type":"ContainerStarted","Data":"3f698a8f2f2d35ededdf3a68bb513d3a4ff9e23772f9ead3c172dd5946a1bfb0"} Nov 25 14:39:30 crc kubenswrapper[4796]: I1125 14:39:30.325445 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7f037a6b-9e7f-401d-b4db-98132fb0f9b2-memberlist\") pod \"speaker-kq8m7\" (UID: \"7f037a6b-9e7f-401d-b4db-98132fb0f9b2\") " pod="metallb-system/speaker-kq8m7" Nov 25 14:39:30 crc kubenswrapper[4796]: I1125 14:39:30.346454 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7f037a6b-9e7f-401d-b4db-98132fb0f9b2-memberlist\") pod \"speaker-kq8m7\" (UID: \"7f037a6b-9e7f-401d-b4db-98132fb0f9b2\") " pod="metallb-system/speaker-kq8m7" Nov 25 14:39:30 crc kubenswrapper[4796]: I1125 14:39:30.395065 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-kq8m7" Nov 25 14:39:30 crc kubenswrapper[4796]: W1125 14:39:30.421863 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f037a6b_9e7f_401d_b4db_98132fb0f9b2.slice/crio-7468f7d94c6492242677e5ca0e38357e09e19ddce6b5895111f41de2d4429129 WatchSource:0}: Error finding container 7468f7d94c6492242677e5ca0e38357e09e19ddce6b5895111f41de2d4429129: Status 404 returned error can't find the container with id 7468f7d94c6492242677e5ca0e38357e09e19ddce6b5895111f41de2d4429129 Nov 25 14:39:30 crc kubenswrapper[4796]: I1125 14:39:30.889290 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kq8m7" event={"ID":"7f037a6b-9e7f-401d-b4db-98132fb0f9b2","Type":"ContainerStarted","Data":"ee925b1050dbf7bb115d644840f0834f2d72022d185fa70b01008ef3c0e9afea"} Nov 25 14:39:30 crc kubenswrapper[4796]: I1125 14:39:30.889346 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/speaker-kq8m7" event={"ID":"7f037a6b-9e7f-401d-b4db-98132fb0f9b2","Type":"ContainerStarted","Data":"33488a33f8bb3a21b9fe02ccc775f0e1010eee0f4f0a8d13d706f8882b3e1fc9"} Nov 25 14:39:30 crc kubenswrapper[4796]: I1125 14:39:30.889362 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kq8m7" event={"ID":"7f037a6b-9e7f-401d-b4db-98132fb0f9b2","Type":"ContainerStarted","Data":"7468f7d94c6492242677e5ca0e38357e09e19ddce6b5895111f41de2d4429129"} Nov 25 14:39:30 crc kubenswrapper[4796]: I1125 14:39:30.891212 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-kq8m7" Nov 25 14:39:30 crc kubenswrapper[4796]: I1125 14:39:30.895241 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-zr8xl" event={"ID":"1979dccd-b017-42f5-9fe1-8717af3f948a","Type":"ContainerStarted","Data":"6a5bd2f9ac4b4a9c67dcd74212ccc7105fdb682f79e5735e07e1c9e11e924ede"} Nov 25 14:39:30 crc kubenswrapper[4796]: I1125 14:39:30.895385 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-zr8xl" Nov 25 14:39:30 crc kubenswrapper[4796]: I1125 14:39:30.908498 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-kq8m7" podStartSLOduration=2.9084827989999997 podStartE2EDuration="2.908482799s" podCreationTimestamp="2025-11-25 14:39:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:39:30.905837267 +0000 UTC m=+899.248946701" watchObservedRunningTime="2025-11-25 14:39:30.908482799 +0000 UTC m=+899.251592223" Nov 25 14:39:30 crc kubenswrapper[4796]: I1125 14:39:30.926888 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-zr8xl" podStartSLOduration=2.926864636 podStartE2EDuration="2.926864636s" 
podCreationTimestamp="2025-11-25 14:39:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:39:30.921347676 +0000 UTC m=+899.264457120" watchObservedRunningTime="2025-11-25 14:39:30.926864636 +0000 UTC m=+899.269974060" Nov 25 14:39:36 crc kubenswrapper[4796]: I1125 14:39:36.947657 4796 generic.go:334] "Generic (PLEG): container finished" podID="4fc70054-d9cd-4545-b9e7-d6665887e94d" containerID="6bf862ac762b5727c1ff645792de83842dc557bb340b7e8889ff766fea7f4d53" exitCode=0 Nov 25 14:39:36 crc kubenswrapper[4796]: I1125 14:39:36.947784 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vhclt" event={"ID":"4fc70054-d9cd-4545-b9e7-d6665887e94d","Type":"ContainerDied","Data":"6bf862ac762b5727c1ff645792de83842dc557bb340b7e8889ff766fea7f4d53"} Nov 25 14:39:36 crc kubenswrapper[4796]: I1125 14:39:36.950385 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-zk9xk" event={"ID":"79869a5f-b9a3-46e0-bac7-9ff9ac72b16c","Type":"ContainerStarted","Data":"aeaa4823c5ded998b2672e308f6cc17d37f68c11b21835e04c0943cb7f6b30a9"} Nov 25 14:39:36 crc kubenswrapper[4796]: I1125 14:39:36.950907 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-zk9xk" Nov 25 14:39:37 crc kubenswrapper[4796]: I1125 14:39:37.010801 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-zk9xk" podStartSLOduration=2.477187125 podStartE2EDuration="9.010764939s" podCreationTimestamp="2025-11-25 14:39:28 +0000 UTC" firstStartedPulling="2025-11-25 14:39:29.669072808 +0000 UTC m=+898.012182232" lastFinishedPulling="2025-11-25 14:39:36.202650622 +0000 UTC m=+904.545760046" observedRunningTime="2025-11-25 14:39:37.005521677 +0000 UTC m=+905.348631121" watchObservedRunningTime="2025-11-25 
14:39:37.010764939 +0000 UTC m=+905.353874403" Nov 25 14:39:37 crc kubenswrapper[4796]: I1125 14:39:37.957982 4796 generic.go:334] "Generic (PLEG): container finished" podID="4fc70054-d9cd-4545-b9e7-d6665887e94d" containerID="70c3e808cc9f6da6a81941b0b982f8cd73afa1401c212d4ed1502fdb68befeb9" exitCode=0 Nov 25 14:39:37 crc kubenswrapper[4796]: I1125 14:39:37.958066 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vhclt" event={"ID":"4fc70054-d9cd-4545-b9e7-d6665887e94d","Type":"ContainerDied","Data":"70c3e808cc9f6da6a81941b0b982f8cd73afa1401c212d4ed1502fdb68befeb9"} Nov 25 14:39:38 crc kubenswrapper[4796]: I1125 14:39:38.965817 4796 generic.go:334] "Generic (PLEG): container finished" podID="4fc70054-d9cd-4545-b9e7-d6665887e94d" containerID="8a25065b98575d04e5387e1e425deb13c6ef4e1bcd10566dabf800cdf035b479" exitCode=0 Nov 25 14:39:38 crc kubenswrapper[4796]: I1125 14:39:38.965892 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vhclt" event={"ID":"4fc70054-d9cd-4545-b9e7-d6665887e94d","Type":"ContainerDied","Data":"8a25065b98575d04e5387e1e425deb13c6ef4e1bcd10566dabf800cdf035b479"} Nov 25 14:39:39 crc kubenswrapper[4796]: I1125 14:39:39.262751 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fbrsl"] Nov 25 14:39:39 crc kubenswrapper[4796]: I1125 14:39:39.264172 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbrsl" Nov 25 14:39:39 crc kubenswrapper[4796]: I1125 14:39:39.272432 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbrsl"] Nov 25 14:39:39 crc kubenswrapper[4796]: I1125 14:39:39.369918 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47b0a909-856c-4bb2-9246-b467a0af9bb1-catalog-content\") pod \"redhat-marketplace-fbrsl\" (UID: \"47b0a909-856c-4bb2-9246-b467a0af9bb1\") " pod="openshift-marketplace/redhat-marketplace-fbrsl" Nov 25 14:39:39 crc kubenswrapper[4796]: I1125 14:39:39.370232 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47b0a909-856c-4bb2-9246-b467a0af9bb1-utilities\") pod \"redhat-marketplace-fbrsl\" (UID: \"47b0a909-856c-4bb2-9246-b467a0af9bb1\") " pod="openshift-marketplace/redhat-marketplace-fbrsl" Nov 25 14:39:39 crc kubenswrapper[4796]: I1125 14:39:39.370282 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn6fn\" (UniqueName: \"kubernetes.io/projected/47b0a909-856c-4bb2-9246-b467a0af9bb1-kube-api-access-xn6fn\") pod \"redhat-marketplace-fbrsl\" (UID: \"47b0a909-856c-4bb2-9246-b467a0af9bb1\") " pod="openshift-marketplace/redhat-marketplace-fbrsl" Nov 25 14:39:39 crc kubenswrapper[4796]: I1125 14:39:39.471470 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47b0a909-856c-4bb2-9246-b467a0af9bb1-catalog-content\") pod \"redhat-marketplace-fbrsl\" (UID: \"47b0a909-856c-4bb2-9246-b467a0af9bb1\") " pod="openshift-marketplace/redhat-marketplace-fbrsl" Nov 25 14:39:39 crc kubenswrapper[4796]: I1125 14:39:39.471541 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47b0a909-856c-4bb2-9246-b467a0af9bb1-utilities\") pod \"redhat-marketplace-fbrsl\" (UID: \"47b0a909-856c-4bb2-9246-b467a0af9bb1\") " pod="openshift-marketplace/redhat-marketplace-fbrsl" Nov 25 14:39:39 crc kubenswrapper[4796]: I1125 14:39:39.471621 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn6fn\" (UniqueName: \"kubernetes.io/projected/47b0a909-856c-4bb2-9246-b467a0af9bb1-kube-api-access-xn6fn\") pod \"redhat-marketplace-fbrsl\" (UID: \"47b0a909-856c-4bb2-9246-b467a0af9bb1\") " pod="openshift-marketplace/redhat-marketplace-fbrsl" Nov 25 14:39:39 crc kubenswrapper[4796]: I1125 14:39:39.472056 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47b0a909-856c-4bb2-9246-b467a0af9bb1-catalog-content\") pod \"redhat-marketplace-fbrsl\" (UID: \"47b0a909-856c-4bb2-9246-b467a0af9bb1\") " pod="openshift-marketplace/redhat-marketplace-fbrsl" Nov 25 14:39:39 crc kubenswrapper[4796]: I1125 14:39:39.472115 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47b0a909-856c-4bb2-9246-b467a0af9bb1-utilities\") pod \"redhat-marketplace-fbrsl\" (UID: \"47b0a909-856c-4bb2-9246-b467a0af9bb1\") " pod="openshift-marketplace/redhat-marketplace-fbrsl" Nov 25 14:39:39 crc kubenswrapper[4796]: I1125 14:39:39.492535 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn6fn\" (UniqueName: \"kubernetes.io/projected/47b0a909-856c-4bb2-9246-b467a0af9bb1-kube-api-access-xn6fn\") pod \"redhat-marketplace-fbrsl\" (UID: \"47b0a909-856c-4bb2-9246-b467a0af9bb1\") " pod="openshift-marketplace/redhat-marketplace-fbrsl" Nov 25 14:39:39 crc kubenswrapper[4796]: I1125 14:39:39.644384 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbrsl" Nov 25 14:39:40 crc kubenswrapper[4796]: I1125 14:39:39.999888 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vhclt" event={"ID":"4fc70054-d9cd-4545-b9e7-d6665887e94d","Type":"ContainerStarted","Data":"ddde274dccbf54b271de2a05ed35fac6be25f7b4803597f1e1b19eb863e16a5d"} Nov 25 14:39:40 crc kubenswrapper[4796]: I1125 14:39:40.000566 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vhclt" event={"ID":"4fc70054-d9cd-4545-b9e7-d6665887e94d","Type":"ContainerStarted","Data":"62fb5a5f177a47d964bbc5b4514081c708a4cacac37f70c7d99049e44767d87a"} Nov 25 14:39:40 crc kubenswrapper[4796]: I1125 14:39:40.000624 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vhclt" event={"ID":"4fc70054-d9cd-4545-b9e7-d6665887e94d","Type":"ContainerStarted","Data":"a213592e20a3236f790eabf7739bf73e24cb487582afefa338e3d5525efbe7c2"} Nov 25 14:39:40 crc kubenswrapper[4796]: I1125 14:39:40.000635 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vhclt" event={"ID":"4fc70054-d9cd-4545-b9e7-d6665887e94d","Type":"ContainerStarted","Data":"889bd1fcd63a8adfd09948c851a4af505f32152c5b0a276558ebac2c46637952"} Nov 25 14:39:40 crc kubenswrapper[4796]: I1125 14:39:40.129464 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbrsl"] Nov 25 14:39:40 crc kubenswrapper[4796]: I1125 14:39:40.400792 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-kq8m7" Nov 25 14:39:41 crc kubenswrapper[4796]: I1125 14:39:41.008097 4796 generic.go:334] "Generic (PLEG): container finished" podID="47b0a909-856c-4bb2-9246-b467a0af9bb1" containerID="ac85689b2bfb3fd62923844d727ee52e00f66e4a1ed247530d006d31fc6b99c2" exitCode=0 Nov 25 14:39:41 crc kubenswrapper[4796]: I1125 14:39:41.008180 4796 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbrsl" event={"ID":"47b0a909-856c-4bb2-9246-b467a0af9bb1","Type":"ContainerDied","Data":"ac85689b2bfb3fd62923844d727ee52e00f66e4a1ed247530d006d31fc6b99c2"} Nov 25 14:39:41 crc kubenswrapper[4796]: I1125 14:39:41.008208 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbrsl" event={"ID":"47b0a909-856c-4bb2-9246-b467a0af9bb1","Type":"ContainerStarted","Data":"7714965fa5b303f1d02afcb96f4a2bd53bc3235d4fa82e44f72758399be72bd0"} Nov 25 14:39:41 crc kubenswrapper[4796]: I1125 14:39:41.012999 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vhclt" event={"ID":"4fc70054-d9cd-4545-b9e7-d6665887e94d","Type":"ContainerStarted","Data":"2c22de6787a305f7e58d75526a22dd419e65d72ed393a5eee31ed910f0374ab2"} Nov 25 14:39:41 crc kubenswrapper[4796]: I1125 14:39:41.013040 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vhclt" event={"ID":"4fc70054-d9cd-4545-b9e7-d6665887e94d","Type":"ContainerStarted","Data":"4aac2dba6ec3f5e1fdfc47b356cc2c59306b341457f7cff76beb79e352d9b903"} Nov 25 14:39:41 crc kubenswrapper[4796]: I1125 14:39:41.013189 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-vhclt" Nov 25 14:39:41 crc kubenswrapper[4796]: I1125 14:39:41.048836 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-vhclt" podStartSLOduration=5.926371287 podStartE2EDuration="13.048812305s" podCreationTimestamp="2025-11-25 14:39:28 +0000 UTC" firstStartedPulling="2025-11-25 14:39:29.056557533 +0000 UTC m=+897.399666957" lastFinishedPulling="2025-11-25 14:39:36.178998551 +0000 UTC m=+904.522107975" observedRunningTime="2025-11-25 14:39:41.046183283 +0000 UTC m=+909.389292707" watchObservedRunningTime="2025-11-25 14:39:41.048812305 +0000 UTC m=+909.391921749" Nov 25 14:39:42 crc kubenswrapper[4796]: I1125 
14:39:42.021055 4796 generic.go:334] "Generic (PLEG): container finished" podID="47b0a909-856c-4bb2-9246-b467a0af9bb1" containerID="eef0730259ca417e56315c4d2bae733e35476adf9511d281b7f0f7b4cf9fada4" exitCode=0 Nov 25 14:39:42 crc kubenswrapper[4796]: I1125 14:39:42.021208 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbrsl" event={"ID":"47b0a909-856c-4bb2-9246-b467a0af9bb1","Type":"ContainerDied","Data":"eef0730259ca417e56315c4d2bae733e35476adf9511d281b7f0f7b4cf9fada4"} Nov 25 14:39:43 crc kubenswrapper[4796]: I1125 14:39:43.029861 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbrsl" event={"ID":"47b0a909-856c-4bb2-9246-b467a0af9bb1","Type":"ContainerStarted","Data":"419ba50180c95cea1bfbc170935b7026ac5c73f9382f5673fb35b42ef5be940e"} Nov 25 14:39:43 crc kubenswrapper[4796]: I1125 14:39:43.051503 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fbrsl" podStartSLOduration=2.595863595 podStartE2EDuration="4.051467336s" podCreationTimestamp="2025-11-25 14:39:39 +0000 UTC" firstStartedPulling="2025-11-25 14:39:41.009770908 +0000 UTC m=+909.352880332" lastFinishedPulling="2025-11-25 14:39:42.465374649 +0000 UTC m=+910.808484073" observedRunningTime="2025-11-25 14:39:43.04674345 +0000 UTC m=+911.389852914" watchObservedRunningTime="2025-11-25 14:39:43.051467336 +0000 UTC m=+911.394576810" Nov 25 14:39:43 crc kubenswrapper[4796]: I1125 14:39:43.844977 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-vhclt" Nov 25 14:39:43 crc kubenswrapper[4796]: I1125 14:39:43.897598 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-vhclt" Nov 25 14:39:46 crc kubenswrapper[4796]: I1125 14:39:46.641252 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-qwxzx"] 
Nov 25 14:39:46 crc kubenswrapper[4796]: I1125 14:39:46.642568 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qwxzx" Nov 25 14:39:46 crc kubenswrapper[4796]: I1125 14:39:46.646859 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 25 14:39:46 crc kubenswrapper[4796]: I1125 14:39:46.646986 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 25 14:39:46 crc kubenswrapper[4796]: I1125 14:39:46.647492 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-5tl5s" Nov 25 14:39:46 crc kubenswrapper[4796]: I1125 14:39:46.660050 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qwxzx"] Nov 25 14:39:46 crc kubenswrapper[4796]: I1125 14:39:46.717527 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cj72\" (UniqueName: \"kubernetes.io/projected/3debb5d9-f484-4de3-aa2d-f610270b8584-kube-api-access-8cj72\") pod \"openstack-operator-index-qwxzx\" (UID: \"3debb5d9-f484-4de3-aa2d-f610270b8584\") " pod="openstack-operators/openstack-operator-index-qwxzx" Nov 25 14:39:46 crc kubenswrapper[4796]: I1125 14:39:46.818798 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cj72\" (UniqueName: \"kubernetes.io/projected/3debb5d9-f484-4de3-aa2d-f610270b8584-kube-api-access-8cj72\") pod \"openstack-operator-index-qwxzx\" (UID: \"3debb5d9-f484-4de3-aa2d-f610270b8584\") " pod="openstack-operators/openstack-operator-index-qwxzx" Nov 25 14:39:46 crc kubenswrapper[4796]: I1125 14:39:46.837701 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cj72\" (UniqueName: 
\"kubernetes.io/projected/3debb5d9-f484-4de3-aa2d-f610270b8584-kube-api-access-8cj72\") pod \"openstack-operator-index-qwxzx\" (UID: \"3debb5d9-f484-4de3-aa2d-f610270b8584\") " pod="openstack-operators/openstack-operator-index-qwxzx"
Nov 25 14:39:46 crc kubenswrapper[4796]: I1125 14:39:46.963510 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qwxzx"
Nov 25 14:39:47 crc kubenswrapper[4796]: I1125 14:39:47.399312 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qwxzx"]
Nov 25 14:39:47 crc kubenswrapper[4796]: W1125 14:39:47.407132 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3debb5d9_f484_4de3_aa2d_f610270b8584.slice/crio-90d6f4e3f610d80b01c671878491a0049379db735ead02a8c56c9c9100926ecf WatchSource:0}: Error finding container 90d6f4e3f610d80b01c671878491a0049379db735ead02a8c56c9c9100926ecf: Status 404 returned error can't find the container with id 90d6f4e3f610d80b01c671878491a0049379db735ead02a8c56c9c9100926ecf
Nov 25 14:39:48 crc kubenswrapper[4796]: I1125 14:39:48.061036 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qwxzx" event={"ID":"3debb5d9-f484-4de3-aa2d-f610270b8584","Type":"ContainerStarted","Data":"90d6f4e3f610d80b01c671878491a0049379db735ead02a8c56c9c9100926ecf"}
Nov 25 14:39:48 crc kubenswrapper[4796]: I1125 14:39:48.913151 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-zr8xl"
Nov 25 14:39:49 crc kubenswrapper[4796]: I1125 14:39:49.424411 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-zk9xk"
Nov 25 14:39:49 crc kubenswrapper[4796]: I1125 14:39:49.513977 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 14:39:49 crc kubenswrapper[4796]: I1125 14:39:49.514037 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 14:39:49 crc kubenswrapper[4796]: I1125 14:39:49.645249 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fbrsl"
Nov 25 14:39:49 crc kubenswrapper[4796]: I1125 14:39:49.645799 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fbrsl"
Nov 25 14:39:49 crc kubenswrapper[4796]: I1125 14:39:49.697782 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fbrsl"
Nov 25 14:39:50 crc kubenswrapper[4796]: I1125 14:39:50.118878 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fbrsl"
Nov 25 14:39:51 crc kubenswrapper[4796]: I1125 14:39:51.229496 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-qwxzx"]
Nov 25 14:39:51 crc kubenswrapper[4796]: I1125 14:39:51.837837 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-28ljh"]
Nov 25 14:39:51 crc kubenswrapper[4796]: I1125 14:39:51.839278 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-28ljh"
Nov 25 14:39:51 crc kubenswrapper[4796]: I1125 14:39:51.847715 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-28ljh"]
Nov 25 14:39:51 crc kubenswrapper[4796]: I1125 14:39:51.993357 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdfj5\" (UniqueName: \"kubernetes.io/projected/edc88d92-5818-49e5-877c-5efd6a8e1912-kube-api-access-xdfj5\") pod \"openstack-operator-index-28ljh\" (UID: \"edc88d92-5818-49e5-877c-5efd6a8e1912\") " pod="openstack-operators/openstack-operator-index-28ljh"
Nov 25 14:39:52 crc kubenswrapper[4796]: I1125 14:39:52.095419 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdfj5\" (UniqueName: \"kubernetes.io/projected/edc88d92-5818-49e5-877c-5efd6a8e1912-kube-api-access-xdfj5\") pod \"openstack-operator-index-28ljh\" (UID: \"edc88d92-5818-49e5-877c-5efd6a8e1912\") " pod="openstack-operators/openstack-operator-index-28ljh"
Nov 25 14:39:52 crc kubenswrapper[4796]: I1125 14:39:52.119535 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdfj5\" (UniqueName: \"kubernetes.io/projected/edc88d92-5818-49e5-877c-5efd6a8e1912-kube-api-access-xdfj5\") pod \"openstack-operator-index-28ljh\" (UID: \"edc88d92-5818-49e5-877c-5efd6a8e1912\") " pod="openstack-operators/openstack-operator-index-28ljh"
Nov 25 14:39:52 crc kubenswrapper[4796]: I1125 14:39:52.161664 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-28ljh"
Nov 25 14:39:52 crc kubenswrapper[4796]: I1125 14:39:52.581458 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-28ljh"]
Nov 25 14:39:52 crc kubenswrapper[4796]: W1125 14:39:52.586084 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedc88d92_5818_49e5_877c_5efd6a8e1912.slice/crio-43e4e970db26fbd0fe1c37049bfe76bb83d8967bcd67d77987a302241c90c3c1 WatchSource:0}: Error finding container 43e4e970db26fbd0fe1c37049bfe76bb83d8967bcd67d77987a302241c90c3c1: Status 404 returned error can't find the container with id 43e4e970db26fbd0fe1c37049bfe76bb83d8967bcd67d77987a302241c90c3c1
Nov 25 14:39:53 crc kubenswrapper[4796]: I1125 14:39:53.108065 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-28ljh" event={"ID":"edc88d92-5818-49e5-877c-5efd6a8e1912","Type":"ContainerStarted","Data":"43e4e970db26fbd0fe1c37049bfe76bb83d8967bcd67d77987a302241c90c3c1"}
Nov 25 14:39:53 crc kubenswrapper[4796]: I1125 14:39:53.436452 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbrsl"]
Nov 25 14:39:53 crc kubenswrapper[4796]: I1125 14:39:53.436714 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fbrsl" podUID="47b0a909-856c-4bb2-9246-b467a0af9bb1" containerName="registry-server" containerID="cri-o://419ba50180c95cea1bfbc170935b7026ac5c73f9382f5673fb35b42ef5be940e" gracePeriod=2
Nov 25 14:39:54 crc kubenswrapper[4796]: I1125 14:39:54.115985 4796 generic.go:334] "Generic (PLEG): container finished" podID="47b0a909-856c-4bb2-9246-b467a0af9bb1" containerID="419ba50180c95cea1bfbc170935b7026ac5c73f9382f5673fb35b42ef5be940e" exitCode=0
Nov 25 14:39:54 crc kubenswrapper[4796]: I1125 14:39:54.116073 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbrsl" event={"ID":"47b0a909-856c-4bb2-9246-b467a0af9bb1","Type":"ContainerDied","Data":"419ba50180c95cea1bfbc170935b7026ac5c73f9382f5673fb35b42ef5be940e"}
Nov 25 14:39:54 crc kubenswrapper[4796]: I1125 14:39:54.184046 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbrsl"
Nov 25 14:39:54 crc kubenswrapper[4796]: I1125 14:39:54.327850 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn6fn\" (UniqueName: \"kubernetes.io/projected/47b0a909-856c-4bb2-9246-b467a0af9bb1-kube-api-access-xn6fn\") pod \"47b0a909-856c-4bb2-9246-b467a0af9bb1\" (UID: \"47b0a909-856c-4bb2-9246-b467a0af9bb1\") "
Nov 25 14:39:54 crc kubenswrapper[4796]: I1125 14:39:54.327987 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47b0a909-856c-4bb2-9246-b467a0af9bb1-catalog-content\") pod \"47b0a909-856c-4bb2-9246-b467a0af9bb1\" (UID: \"47b0a909-856c-4bb2-9246-b467a0af9bb1\") "
Nov 25 14:39:54 crc kubenswrapper[4796]: I1125 14:39:54.328016 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47b0a909-856c-4bb2-9246-b467a0af9bb1-utilities\") pod \"47b0a909-856c-4bb2-9246-b467a0af9bb1\" (UID: \"47b0a909-856c-4bb2-9246-b467a0af9bb1\") "
Nov 25 14:39:54 crc kubenswrapper[4796]: I1125 14:39:54.329544 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47b0a909-856c-4bb2-9246-b467a0af9bb1-utilities" (OuterVolumeSpecName: "utilities") pod "47b0a909-856c-4bb2-9246-b467a0af9bb1" (UID: "47b0a909-856c-4bb2-9246-b467a0af9bb1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 14:39:54 crc kubenswrapper[4796]: I1125 14:39:54.333903 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47b0a909-856c-4bb2-9246-b467a0af9bb1-kube-api-access-xn6fn" (OuterVolumeSpecName: "kube-api-access-xn6fn") pod "47b0a909-856c-4bb2-9246-b467a0af9bb1" (UID: "47b0a909-856c-4bb2-9246-b467a0af9bb1"). InnerVolumeSpecName "kube-api-access-xn6fn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 14:39:54 crc kubenswrapper[4796]: I1125 14:39:54.346595 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47b0a909-856c-4bb2-9246-b467a0af9bb1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47b0a909-856c-4bb2-9246-b467a0af9bb1" (UID: "47b0a909-856c-4bb2-9246-b467a0af9bb1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 14:39:54 crc kubenswrapper[4796]: I1125 14:39:54.429513 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47b0a909-856c-4bb2-9246-b467a0af9bb1-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 14:39:54 crc kubenswrapper[4796]: I1125 14:39:54.429607 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47b0a909-856c-4bb2-9246-b467a0af9bb1-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 14:39:54 crc kubenswrapper[4796]: I1125 14:39:54.429638 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn6fn\" (UniqueName: \"kubernetes.io/projected/47b0a909-856c-4bb2-9246-b467a0af9bb1-kube-api-access-xn6fn\") on node \"crc\" DevicePath \"\""
Nov 25 14:39:55 crc kubenswrapper[4796]: I1125 14:39:55.125655 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbrsl" event={"ID":"47b0a909-856c-4bb2-9246-b467a0af9bb1","Type":"ContainerDied","Data":"7714965fa5b303f1d02afcb96f4a2bd53bc3235d4fa82e44f72758399be72bd0"}
Nov 25 14:39:55 crc kubenswrapper[4796]: I1125 14:39:55.125975 4796 scope.go:117] "RemoveContainer" containerID="419ba50180c95cea1bfbc170935b7026ac5c73f9382f5673fb35b42ef5be940e"
Nov 25 14:39:55 crc kubenswrapper[4796]: I1125 14:39:55.125723 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbrsl"
Nov 25 14:39:55 crc kubenswrapper[4796]: I1125 14:39:55.148234 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbrsl"]
Nov 25 14:39:55 crc kubenswrapper[4796]: I1125 14:39:55.155428 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbrsl"]
Nov 25 14:39:55 crc kubenswrapper[4796]: I1125 14:39:55.292208 4796 scope.go:117] "RemoveContainer" containerID="eef0730259ca417e56315c4d2bae733e35476adf9511d281b7f0f7b4cf9fada4"
Nov 25 14:39:56 crc kubenswrapper[4796]: I1125 14:39:56.020969 4796 scope.go:117] "RemoveContainer" containerID="ac85689b2bfb3fd62923844d727ee52e00f66e4a1ed247530d006d31fc6b99c2"
Nov 25 14:39:56 crc kubenswrapper[4796]: I1125 14:39:56.423182 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47b0a909-856c-4bb2-9246-b467a0af9bb1" path="/var/lib/kubelet/pods/47b0a909-856c-4bb2-9246-b467a0af9bb1/volumes"
Nov 25 14:39:57 crc kubenswrapper[4796]: I1125 14:39:57.139961 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-28ljh" event={"ID":"edc88d92-5818-49e5-877c-5efd6a8e1912","Type":"ContainerStarted","Data":"1f7301af6d1409eeed0ab9de3207621f59f2f546bc343d2753ffa1cb6e10213d"}
Nov 25 14:39:57 crc kubenswrapper[4796]: I1125 14:39:57.142326 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qwxzx" event={"ID":"3debb5d9-f484-4de3-aa2d-f610270b8584","Type":"ContainerStarted","Data":"35eb66d133d2e5adecdb08d2e2d413b0378607d60550bdecffe6cc33d960fd1c"}
Nov 25 14:39:57 crc kubenswrapper[4796]: I1125 14:39:57.142433 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-qwxzx" podUID="3debb5d9-f484-4de3-aa2d-f610270b8584" containerName="registry-server" containerID="cri-o://35eb66d133d2e5adecdb08d2e2d413b0378607d60550bdecffe6cc33d960fd1c" gracePeriod=2
Nov 25 14:39:57 crc kubenswrapper[4796]: I1125 14:39:57.155151 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-28ljh" podStartSLOduration=2.665171297 podStartE2EDuration="6.155133169s" podCreationTimestamp="2025-11-25 14:39:51 +0000 UTC" firstStartedPulling="2025-11-25 14:39:52.589086431 +0000 UTC m=+920.932195845" lastFinishedPulling="2025-11-25 14:39:56.079048293 +0000 UTC m=+924.422157717" observedRunningTime="2025-11-25 14:39:57.153178698 +0000 UTC m=+925.496288122" watchObservedRunningTime="2025-11-25 14:39:57.155133169 +0000 UTC m=+925.498242593"
Nov 25 14:39:57 crc kubenswrapper[4796]: I1125 14:39:57.170163 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-qwxzx" podStartSLOduration=2.49488297 podStartE2EDuration="11.170141292s" podCreationTimestamp="2025-11-25 14:39:46 +0000 UTC" firstStartedPulling="2025-11-25 14:39:47.409500508 +0000 UTC m=+915.752609932" lastFinishedPulling="2025-11-25 14:39:56.08475883 +0000 UTC m=+924.427868254" observedRunningTime="2025-11-25 14:39:57.167519041 +0000 UTC m=+925.510628475" watchObservedRunningTime="2025-11-25 14:39:57.170141292 +0000 UTC m=+925.513250716"
Nov 25 14:39:57 crc kubenswrapper[4796]: I1125 14:39:57.487623 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qwxzx"
Nov 25 14:39:57 crc kubenswrapper[4796]: I1125 14:39:57.576254 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cj72\" (UniqueName: \"kubernetes.io/projected/3debb5d9-f484-4de3-aa2d-f610270b8584-kube-api-access-8cj72\") pod \"3debb5d9-f484-4de3-aa2d-f610270b8584\" (UID: \"3debb5d9-f484-4de3-aa2d-f610270b8584\") "
Nov 25 14:39:57 crc kubenswrapper[4796]: I1125 14:39:57.582061 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3debb5d9-f484-4de3-aa2d-f610270b8584-kube-api-access-8cj72" (OuterVolumeSpecName: "kube-api-access-8cj72") pod "3debb5d9-f484-4de3-aa2d-f610270b8584" (UID: "3debb5d9-f484-4de3-aa2d-f610270b8584"). InnerVolumeSpecName "kube-api-access-8cj72". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 14:39:57 crc kubenswrapper[4796]: I1125 14:39:57.677408 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cj72\" (UniqueName: \"kubernetes.io/projected/3debb5d9-f484-4de3-aa2d-f610270b8584-kube-api-access-8cj72\") on node \"crc\" DevicePath \"\""
Nov 25 14:39:58 crc kubenswrapper[4796]: I1125 14:39:58.149673 4796 generic.go:334] "Generic (PLEG): container finished" podID="3debb5d9-f484-4de3-aa2d-f610270b8584" containerID="35eb66d133d2e5adecdb08d2e2d413b0378607d60550bdecffe6cc33d960fd1c" exitCode=0
Nov 25 14:39:58 crc kubenswrapper[4796]: I1125 14:39:58.149716 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qwxzx" event={"ID":"3debb5d9-f484-4de3-aa2d-f610270b8584","Type":"ContainerDied","Data":"35eb66d133d2e5adecdb08d2e2d413b0378607d60550bdecffe6cc33d960fd1c"}
Nov 25 14:39:58 crc kubenswrapper[4796]: I1125 14:39:58.149748 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qwxzx"
Nov 25 14:39:58 crc kubenswrapper[4796]: I1125 14:39:58.149772 4796 scope.go:117] "RemoveContainer" containerID="35eb66d133d2e5adecdb08d2e2d413b0378607d60550bdecffe6cc33d960fd1c"
Nov 25 14:39:58 crc kubenswrapper[4796]: I1125 14:39:58.149757 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qwxzx" event={"ID":"3debb5d9-f484-4de3-aa2d-f610270b8584","Type":"ContainerDied","Data":"90d6f4e3f610d80b01c671878491a0049379db735ead02a8c56c9c9100926ecf"}
Nov 25 14:39:58 crc kubenswrapper[4796]: I1125 14:39:58.174184 4796 scope.go:117] "RemoveContainer" containerID="35eb66d133d2e5adecdb08d2e2d413b0378607d60550bdecffe6cc33d960fd1c"
Nov 25 14:39:58 crc kubenswrapper[4796]: E1125 14:39:58.174904 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35eb66d133d2e5adecdb08d2e2d413b0378607d60550bdecffe6cc33d960fd1c\": container with ID starting with 35eb66d133d2e5adecdb08d2e2d413b0378607d60550bdecffe6cc33d960fd1c not found: ID does not exist" containerID="35eb66d133d2e5adecdb08d2e2d413b0378607d60550bdecffe6cc33d960fd1c"
Nov 25 14:39:58 crc kubenswrapper[4796]: I1125 14:39:58.174945 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35eb66d133d2e5adecdb08d2e2d413b0378607d60550bdecffe6cc33d960fd1c"} err="failed to get container status \"35eb66d133d2e5adecdb08d2e2d413b0378607d60550bdecffe6cc33d960fd1c\": rpc error: code = NotFound desc = could not find container \"35eb66d133d2e5adecdb08d2e2d413b0378607d60550bdecffe6cc33d960fd1c\": container with ID starting with 35eb66d133d2e5adecdb08d2e2d413b0378607d60550bdecffe6cc33d960fd1c not found: ID does not exist"
Nov 25 14:39:58 crc kubenswrapper[4796]: I1125 14:39:58.187659 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-qwxzx"]
Nov 25 14:39:58 crc kubenswrapper[4796]: I1125 14:39:58.193681 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-qwxzx"]
Nov 25 14:39:58 crc kubenswrapper[4796]: I1125 14:39:58.417675 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3debb5d9-f484-4de3-aa2d-f610270b8584" path="/var/lib/kubelet/pods/3debb5d9-f484-4de3-aa2d-f610270b8584/volumes"
Nov 25 14:39:58 crc kubenswrapper[4796]: I1125 14:39:58.849708 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-vhclt"
Nov 25 14:40:02 crc kubenswrapper[4796]: I1125 14:40:02.162455 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-28ljh"
Nov 25 14:40:02 crc kubenswrapper[4796]: I1125 14:40:02.162874 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-28ljh"
Nov 25 14:40:02 crc kubenswrapper[4796]: I1125 14:40:02.191827 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-28ljh"
Nov 25 14:40:02 crc kubenswrapper[4796]: I1125 14:40:02.231149 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-28ljh"
Nov 25 14:40:03 crc kubenswrapper[4796]: I1125 14:40:03.885505 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f"]
Nov 25 14:40:03 crc kubenswrapper[4796]: E1125 14:40:03.886088 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47b0a909-856c-4bb2-9246-b467a0af9bb1" containerName="registry-server"
Nov 25 14:40:03 crc kubenswrapper[4796]: I1125 14:40:03.886102 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="47b0a909-856c-4bb2-9246-b467a0af9bb1" containerName="registry-server"
Nov 25 14:40:03 crc kubenswrapper[4796]: E1125 14:40:03.886120 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3debb5d9-f484-4de3-aa2d-f610270b8584" containerName="registry-server"
Nov 25 14:40:03 crc kubenswrapper[4796]: I1125 14:40:03.886129 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3debb5d9-f484-4de3-aa2d-f610270b8584" containerName="registry-server"
Nov 25 14:40:03 crc kubenswrapper[4796]: E1125 14:40:03.886149 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47b0a909-856c-4bb2-9246-b467a0af9bb1" containerName="extract-content"
Nov 25 14:40:03 crc kubenswrapper[4796]: I1125 14:40:03.886157 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="47b0a909-856c-4bb2-9246-b467a0af9bb1" containerName="extract-content"
Nov 25 14:40:03 crc kubenswrapper[4796]: E1125 14:40:03.886169 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47b0a909-856c-4bb2-9246-b467a0af9bb1" containerName="extract-utilities"
Nov 25 14:40:03 crc kubenswrapper[4796]: I1125 14:40:03.886176 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="47b0a909-856c-4bb2-9246-b467a0af9bb1" containerName="extract-utilities"
Nov 25 14:40:03 crc kubenswrapper[4796]: I1125 14:40:03.886301 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="47b0a909-856c-4bb2-9246-b467a0af9bb1" containerName="registry-server"
Nov 25 14:40:03 crc kubenswrapper[4796]: I1125 14:40:03.886321 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3debb5d9-f484-4de3-aa2d-f610270b8584" containerName="registry-server"
Nov 25 14:40:03 crc kubenswrapper[4796]: I1125 14:40:03.887285 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f"
Nov 25 14:40:03 crc kubenswrapper[4796]: I1125 14:40:03.890144 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rbg2g"
Nov 25 14:40:03 crc kubenswrapper[4796]: I1125 14:40:03.969822 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be76be41-9513-40eb-9140-8d3f2ab3a05d-bundle\") pod \"e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f\" (UID: \"be76be41-9513-40eb-9140-8d3f2ab3a05d\") " pod="openstack-operators/e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f"
Nov 25 14:40:03 crc kubenswrapper[4796]: I1125 14:40:03.970195 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x4pt\" (UniqueName: \"kubernetes.io/projected/be76be41-9513-40eb-9140-8d3f2ab3a05d-kube-api-access-9x4pt\") pod \"e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f\" (UID: \"be76be41-9513-40eb-9140-8d3f2ab3a05d\") " pod="openstack-operators/e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f"
Nov 25 14:40:03 crc kubenswrapper[4796]: I1125 14:40:03.970323 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be76be41-9513-40eb-9140-8d3f2ab3a05d-util\") pod \"e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f\" (UID: \"be76be41-9513-40eb-9140-8d3f2ab3a05d\") " pod="openstack-operators/e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f"
Nov 25 14:40:03 crc kubenswrapper[4796]: I1125 14:40:03.971600 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f"]
Nov 25 14:40:04 crc kubenswrapper[4796]: I1125 14:40:04.071754 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be76be41-9513-40eb-9140-8d3f2ab3a05d-bundle\") pod \"e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f\" (UID: \"be76be41-9513-40eb-9140-8d3f2ab3a05d\") " pod="openstack-operators/e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f"
Nov 25 14:40:04 crc kubenswrapper[4796]: I1125 14:40:04.072142 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x4pt\" (UniqueName: \"kubernetes.io/projected/be76be41-9513-40eb-9140-8d3f2ab3a05d-kube-api-access-9x4pt\") pod \"e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f\" (UID: \"be76be41-9513-40eb-9140-8d3f2ab3a05d\") " pod="openstack-operators/e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f"
Nov 25 14:40:04 crc kubenswrapper[4796]: I1125 14:40:04.072210 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be76be41-9513-40eb-9140-8d3f2ab3a05d-bundle\") pod \"e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f\" (UID: \"be76be41-9513-40eb-9140-8d3f2ab3a05d\") " pod="openstack-operators/e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f"
Nov 25 14:40:04 crc kubenswrapper[4796]: I1125 14:40:04.072375 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be76be41-9513-40eb-9140-8d3f2ab3a05d-util\") pod \"e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f\" (UID: \"be76be41-9513-40eb-9140-8d3f2ab3a05d\") " pod="openstack-operators/e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f"
Nov 25 14:40:04 crc kubenswrapper[4796]: I1125 14:40:04.073272 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be76be41-9513-40eb-9140-8d3f2ab3a05d-util\") pod \"e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f\" (UID: \"be76be41-9513-40eb-9140-8d3f2ab3a05d\") " pod="openstack-operators/e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f"
Nov 25 14:40:04 crc kubenswrapper[4796]: I1125 14:40:04.090194 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x4pt\" (UniqueName: \"kubernetes.io/projected/be76be41-9513-40eb-9140-8d3f2ab3a05d-kube-api-access-9x4pt\") pod \"e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f\" (UID: \"be76be41-9513-40eb-9140-8d3f2ab3a05d\") " pod="openstack-operators/e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f"
Nov 25 14:40:04 crc kubenswrapper[4796]: I1125 14:40:04.231535 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f"
Nov 25 14:40:04 crc kubenswrapper[4796]: I1125 14:40:04.697919 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f"]
Nov 25 14:40:05 crc kubenswrapper[4796]: I1125 14:40:05.199258 4796 generic.go:334] "Generic (PLEG): container finished" podID="be76be41-9513-40eb-9140-8d3f2ab3a05d" containerID="effa722aaf8fec6a67a60b750018378196cbc44bc1d1c46decc9fa2bd2a3b6d1" exitCode=0
Nov 25 14:40:05 crc kubenswrapper[4796]: I1125 14:40:05.199304 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f" event={"ID":"be76be41-9513-40eb-9140-8d3f2ab3a05d","Type":"ContainerDied","Data":"effa722aaf8fec6a67a60b750018378196cbc44bc1d1c46decc9fa2bd2a3b6d1"}
Nov 25 14:40:05 crc kubenswrapper[4796]: I1125 14:40:05.199330 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f" event={"ID":"be76be41-9513-40eb-9140-8d3f2ab3a05d","Type":"ContainerStarted","Data":"18f12c1b8a56ac95eabdaa0a86fe2a68b1ce7ca07c470ea5094b1fa2d79ed7ad"}
Nov 25 14:40:06 crc kubenswrapper[4796]: I1125 14:40:06.208027 4796 generic.go:334] "Generic (PLEG): container finished" podID="be76be41-9513-40eb-9140-8d3f2ab3a05d" containerID="fbba6b4900061a4624e58edf59ec0371bc5d5a29501ab45787d297536c4a97ed" exitCode=0
Nov 25 14:40:06 crc kubenswrapper[4796]: I1125 14:40:06.208115 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f" event={"ID":"be76be41-9513-40eb-9140-8d3f2ab3a05d","Type":"ContainerDied","Data":"fbba6b4900061a4624e58edf59ec0371bc5d5a29501ab45787d297536c4a97ed"}
Nov 25 14:40:07 crc kubenswrapper[4796]: I1125 14:40:07.220014 4796 generic.go:334] "Generic (PLEG): container finished" podID="be76be41-9513-40eb-9140-8d3f2ab3a05d" containerID="8c24199c992a75dea75c6184dcd512cc087d4afd0e2d8dc2de4233a5c7054ec2" exitCode=0
Nov 25 14:40:07 crc kubenswrapper[4796]: I1125 14:40:07.220191 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f" event={"ID":"be76be41-9513-40eb-9140-8d3f2ab3a05d","Type":"ContainerDied","Data":"8c24199c992a75dea75c6184dcd512cc087d4afd0e2d8dc2de4233a5c7054ec2"}
Nov 25 14:40:08 crc kubenswrapper[4796]: I1125 14:40:08.576335 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f"
Nov 25 14:40:08 crc kubenswrapper[4796]: I1125 14:40:08.734768 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x4pt\" (UniqueName: \"kubernetes.io/projected/be76be41-9513-40eb-9140-8d3f2ab3a05d-kube-api-access-9x4pt\") pod \"be76be41-9513-40eb-9140-8d3f2ab3a05d\" (UID: \"be76be41-9513-40eb-9140-8d3f2ab3a05d\") "
Nov 25 14:40:08 crc kubenswrapper[4796]: I1125 14:40:08.734872 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be76be41-9513-40eb-9140-8d3f2ab3a05d-util\") pod \"be76be41-9513-40eb-9140-8d3f2ab3a05d\" (UID: \"be76be41-9513-40eb-9140-8d3f2ab3a05d\") "
Nov 25 14:40:08 crc kubenswrapper[4796]: I1125 14:40:08.734947 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be76be41-9513-40eb-9140-8d3f2ab3a05d-bundle\") pod \"be76be41-9513-40eb-9140-8d3f2ab3a05d\" (UID: \"be76be41-9513-40eb-9140-8d3f2ab3a05d\") "
Nov 25 14:40:08 crc kubenswrapper[4796]: I1125 14:40:08.736208 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be76be41-9513-40eb-9140-8d3f2ab3a05d-bundle" (OuterVolumeSpecName: "bundle") pod "be76be41-9513-40eb-9140-8d3f2ab3a05d" (UID: "be76be41-9513-40eb-9140-8d3f2ab3a05d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 14:40:08 crc kubenswrapper[4796]: I1125 14:40:08.744216 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be76be41-9513-40eb-9140-8d3f2ab3a05d-kube-api-access-9x4pt" (OuterVolumeSpecName: "kube-api-access-9x4pt") pod "be76be41-9513-40eb-9140-8d3f2ab3a05d" (UID: "be76be41-9513-40eb-9140-8d3f2ab3a05d"). InnerVolumeSpecName "kube-api-access-9x4pt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 14:40:08 crc kubenswrapper[4796]: I1125 14:40:08.766481 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be76be41-9513-40eb-9140-8d3f2ab3a05d-util" (OuterVolumeSpecName: "util") pod "be76be41-9513-40eb-9140-8d3f2ab3a05d" (UID: "be76be41-9513-40eb-9140-8d3f2ab3a05d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 14:40:08 crc kubenswrapper[4796]: I1125 14:40:08.837694 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x4pt\" (UniqueName: \"kubernetes.io/projected/be76be41-9513-40eb-9140-8d3f2ab3a05d-kube-api-access-9x4pt\") on node \"crc\" DevicePath \"\""
Nov 25 14:40:08 crc kubenswrapper[4796]: I1125 14:40:08.837738 4796 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/be76be41-9513-40eb-9140-8d3f2ab3a05d-util\") on node \"crc\" DevicePath \"\""
Nov 25 14:40:08 crc kubenswrapper[4796]: I1125 14:40:08.837751 4796 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/be76be41-9513-40eb-9140-8d3f2ab3a05d-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 14:40:09 crc kubenswrapper[4796]: I1125 14:40:09.239547 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f" event={"ID":"be76be41-9513-40eb-9140-8d3f2ab3a05d","Type":"ContainerDied","Data":"18f12c1b8a56ac95eabdaa0a86fe2a68b1ce7ca07c470ea5094b1fa2d79ed7ad"}
Nov 25 14:40:09 crc kubenswrapper[4796]: I1125 14:40:09.239867 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18f12c1b8a56ac95eabdaa0a86fe2a68b1ce7ca07c470ea5094b1fa2d79ed7ad"
Nov 25 14:40:09 crc kubenswrapper[4796]: I1125 14:40:09.239666 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f"
Nov 25 14:40:12 crc kubenswrapper[4796]: I1125 14:40:12.120746 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5fd4b8b4b5-s2rpd"]
Nov 25 14:40:12 crc kubenswrapper[4796]: E1125 14:40:12.121345 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be76be41-9513-40eb-9140-8d3f2ab3a05d" containerName="pull"
Nov 25 14:40:12 crc kubenswrapper[4796]: I1125 14:40:12.121378 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="be76be41-9513-40eb-9140-8d3f2ab3a05d" containerName="pull"
Nov 25 14:40:12 crc kubenswrapper[4796]: E1125 14:40:12.121392 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be76be41-9513-40eb-9140-8d3f2ab3a05d" containerName="extract"
Nov 25 14:40:12 crc kubenswrapper[4796]: I1125 14:40:12.121399 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="be76be41-9513-40eb-9140-8d3f2ab3a05d" containerName="extract"
Nov 25 14:40:12 crc kubenswrapper[4796]: E1125 14:40:12.121414 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be76be41-9513-40eb-9140-8d3f2ab3a05d" containerName="util"
Nov 25 14:40:12 crc kubenswrapper[4796]: I1125 14:40:12.121422 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="be76be41-9513-40eb-9140-8d3f2ab3a05d" containerName="util"
Nov 25 14:40:12 crc kubenswrapper[4796]: I1125 14:40:12.121599 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="be76be41-9513-40eb-9140-8d3f2ab3a05d" containerName="extract"
Nov 25 14:40:12 crc kubenswrapper[4796]: I1125 14:40:12.122056 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5fd4b8b4b5-s2rpd"
Nov 25 14:40:12 crc kubenswrapper[4796]: I1125 14:40:12.123803 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-jtqq2"
Nov 25 14:40:12 crc kubenswrapper[4796]: I1125 14:40:12.158303 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5fd4b8b4b5-s2rpd"]
Nov 25 14:40:12 crc kubenswrapper[4796]: I1125 14:40:12.286975 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvc99\" (UniqueName: \"kubernetes.io/projected/742f74a5-8ef5-42df-8644-16b6209f5172-kube-api-access-tvc99\") pod \"openstack-operator-controller-operator-5fd4b8b4b5-s2rpd\" (UID: \"742f74a5-8ef5-42df-8644-16b6209f5172\") " pod="openstack-operators/openstack-operator-controller-operator-5fd4b8b4b5-s2rpd"
Nov 25 14:40:12 crc kubenswrapper[4796]: I1125 14:40:12.387674 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvc99\" (UniqueName: \"kubernetes.io/projected/742f74a5-8ef5-42df-8644-16b6209f5172-kube-api-access-tvc99\") pod \"openstack-operator-controller-operator-5fd4b8b4b5-s2rpd\" (UID: \"742f74a5-8ef5-42df-8644-16b6209f5172\") " pod="openstack-operators/openstack-operator-controller-operator-5fd4b8b4b5-s2rpd"
Nov 25 14:40:12 crc kubenswrapper[4796]: I1125 14:40:12.404354 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvc99\" (UniqueName: \"kubernetes.io/projected/742f74a5-8ef5-42df-8644-16b6209f5172-kube-api-access-tvc99\") pod \"openstack-operator-controller-operator-5fd4b8b4b5-s2rpd\" (UID: \"742f74a5-8ef5-42df-8644-16b6209f5172\") " pod="openstack-operators/openstack-operator-controller-operator-5fd4b8b4b5-s2rpd"
Nov 25 14:40:12 crc kubenswrapper[4796]: I1125 14:40:12.439751 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5fd4b8b4b5-s2rpd"
Nov 25 14:40:12 crc kubenswrapper[4796]: I1125 14:40:12.703281 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5fd4b8b4b5-s2rpd"]
Nov 25 14:40:12 crc kubenswrapper[4796]: W1125 14:40:12.712559 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod742f74a5_8ef5_42df_8644_16b6209f5172.slice/crio-f3a8c23922313cd7a3a66b6edd9d4ac90a65917990a27f16f8f4689afdfc7b93 WatchSource:0}: Error finding container f3a8c23922313cd7a3a66b6edd9d4ac90a65917990a27f16f8f4689afdfc7b93: Status 404 returned error can't find the container with id f3a8c23922313cd7a3a66b6edd9d4ac90a65917990a27f16f8f4689afdfc7b93
Nov 25 14:40:13 crc kubenswrapper[4796]: I1125 14:40:13.279047 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5fd4b8b4b5-s2rpd" event={"ID":"742f74a5-8ef5-42df-8644-16b6209f5172","Type":"ContainerStarted","Data":"f3a8c23922313cd7a3a66b6edd9d4ac90a65917990a27f16f8f4689afdfc7b93"}
Nov 25 14:40:15 crc kubenswrapper[4796]: I1125 14:40:15.440735 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fdmqv"]
Nov 25 14:40:15 crc kubenswrapper[4796]: I1125 14:40:15.442100 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fdmqv" Nov 25 14:40:15 crc kubenswrapper[4796]: I1125 14:40:15.458853 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fdmqv"] Nov 25 14:40:15 crc kubenswrapper[4796]: I1125 14:40:15.527251 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2066409e-fcc1-4c0d-a581-97541778b188-catalog-content\") pod \"certified-operators-fdmqv\" (UID: \"2066409e-fcc1-4c0d-a581-97541778b188\") " pod="openshift-marketplace/certified-operators-fdmqv" Nov 25 14:40:15 crc kubenswrapper[4796]: I1125 14:40:15.527287 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p9k2\" (UniqueName: \"kubernetes.io/projected/2066409e-fcc1-4c0d-a581-97541778b188-kube-api-access-2p9k2\") pod \"certified-operators-fdmqv\" (UID: \"2066409e-fcc1-4c0d-a581-97541778b188\") " pod="openshift-marketplace/certified-operators-fdmqv" Nov 25 14:40:15 crc kubenswrapper[4796]: I1125 14:40:15.527314 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2066409e-fcc1-4c0d-a581-97541778b188-utilities\") pod \"certified-operators-fdmqv\" (UID: \"2066409e-fcc1-4c0d-a581-97541778b188\") " pod="openshift-marketplace/certified-operators-fdmqv" Nov 25 14:40:15 crc kubenswrapper[4796]: I1125 14:40:15.629318 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2066409e-fcc1-4c0d-a581-97541778b188-catalog-content\") pod \"certified-operators-fdmqv\" (UID: \"2066409e-fcc1-4c0d-a581-97541778b188\") " pod="openshift-marketplace/certified-operators-fdmqv" Nov 25 14:40:15 crc kubenswrapper[4796]: I1125 14:40:15.629370 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2p9k2\" (UniqueName: \"kubernetes.io/projected/2066409e-fcc1-4c0d-a581-97541778b188-kube-api-access-2p9k2\") pod \"certified-operators-fdmqv\" (UID: \"2066409e-fcc1-4c0d-a581-97541778b188\") " pod="openshift-marketplace/certified-operators-fdmqv" Nov 25 14:40:15 crc kubenswrapper[4796]: I1125 14:40:15.629405 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2066409e-fcc1-4c0d-a581-97541778b188-utilities\") pod \"certified-operators-fdmqv\" (UID: \"2066409e-fcc1-4c0d-a581-97541778b188\") " pod="openshift-marketplace/certified-operators-fdmqv" Nov 25 14:40:15 crc kubenswrapper[4796]: I1125 14:40:15.629785 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2066409e-fcc1-4c0d-a581-97541778b188-catalog-content\") pod \"certified-operators-fdmqv\" (UID: \"2066409e-fcc1-4c0d-a581-97541778b188\") " pod="openshift-marketplace/certified-operators-fdmqv" Nov 25 14:40:15 crc kubenswrapper[4796]: I1125 14:40:15.629892 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2066409e-fcc1-4c0d-a581-97541778b188-utilities\") pod \"certified-operators-fdmqv\" (UID: \"2066409e-fcc1-4c0d-a581-97541778b188\") " pod="openshift-marketplace/certified-operators-fdmqv" Nov 25 14:40:15 crc kubenswrapper[4796]: I1125 14:40:15.665730 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p9k2\" (UniqueName: \"kubernetes.io/projected/2066409e-fcc1-4c0d-a581-97541778b188-kube-api-access-2p9k2\") pod \"certified-operators-fdmqv\" (UID: \"2066409e-fcc1-4c0d-a581-97541778b188\") " pod="openshift-marketplace/certified-operators-fdmqv" Nov 25 14:40:15 crc kubenswrapper[4796]: I1125 14:40:15.769923 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fdmqv" Nov 25 14:40:17 crc kubenswrapper[4796]: I1125 14:40:17.498438 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fdmqv"] Nov 25 14:40:18 crc kubenswrapper[4796]: W1125 14:40:18.032704 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2066409e_fcc1_4c0d_a581_97541778b188.slice/crio-938be43108655ade0542e26a0cc0bd1d26ada5c14c7857bdfe555c580f8c368e WatchSource:0}: Error finding container 938be43108655ade0542e26a0cc0bd1d26ada5c14c7857bdfe555c580f8c368e: Status 404 returned error can't find the container with id 938be43108655ade0542e26a0cc0bd1d26ada5c14c7857bdfe555c580f8c368e Nov 25 14:40:18 crc kubenswrapper[4796]: I1125 14:40:18.312205 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdmqv" event={"ID":"2066409e-fcc1-4c0d-a581-97541778b188","Type":"ContainerStarted","Data":"938be43108655ade0542e26a0cc0bd1d26ada5c14c7857bdfe555c580f8c368e"} Nov 25 14:40:19 crc kubenswrapper[4796]: I1125 14:40:19.324099 4796 generic.go:334] "Generic (PLEG): container finished" podID="2066409e-fcc1-4c0d-a581-97541778b188" containerID="21a97bcbf88a355ecdf3ef6facb40a1140c1062aa37c10a773b5813c87e5400d" exitCode=0 Nov 25 14:40:19 crc kubenswrapper[4796]: I1125 14:40:19.324232 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdmqv" event={"ID":"2066409e-fcc1-4c0d-a581-97541778b188","Type":"ContainerDied","Data":"21a97bcbf88a355ecdf3ef6facb40a1140c1062aa37c10a773b5813c87e5400d"} Nov 25 14:40:19 crc kubenswrapper[4796]: I1125 14:40:19.328608 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5fd4b8b4b5-s2rpd" 
event={"ID":"742f74a5-8ef5-42df-8644-16b6209f5172","Type":"ContainerStarted","Data":"2e94b06e9852c6882fd04b4c61201b79f84cd15746ea1656f64b1d385b93d430"} Nov 25 14:40:19 crc kubenswrapper[4796]: I1125 14:40:19.328886 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-5fd4b8b4b5-s2rpd" Nov 25 14:40:19 crc kubenswrapper[4796]: I1125 14:40:19.384246 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-5fd4b8b4b5-s2rpd" podStartSLOduration=1.922178749 podStartE2EDuration="7.384227909s" podCreationTimestamp="2025-11-25 14:40:12 +0000 UTC" firstStartedPulling="2025-11-25 14:40:12.715124057 +0000 UTC m=+941.058233481" lastFinishedPulling="2025-11-25 14:40:18.177173217 +0000 UTC m=+946.520282641" observedRunningTime="2025-11-25 14:40:19.380298558 +0000 UTC m=+947.723408012" watchObservedRunningTime="2025-11-25 14:40:19.384227909 +0000 UTC m=+947.727337343" Nov 25 14:40:19 crc kubenswrapper[4796]: I1125 14:40:19.513848 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 14:40:19 crc kubenswrapper[4796]: I1125 14:40:19.514079 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 14:40:23 crc kubenswrapper[4796]: I1125 14:40:23.369842 4796 generic.go:334] "Generic (PLEG): container finished" podID="2066409e-fcc1-4c0d-a581-97541778b188" 
containerID="7133a055f67602f7e7e2e43d6204958df96d61cfe70b0c1ac21cb63f2745cb22" exitCode=0 Nov 25 14:40:23 crc kubenswrapper[4796]: I1125 14:40:23.369944 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdmqv" event={"ID":"2066409e-fcc1-4c0d-a581-97541778b188","Type":"ContainerDied","Data":"7133a055f67602f7e7e2e43d6204958df96d61cfe70b0c1ac21cb63f2745cb22"} Nov 25 14:40:23 crc kubenswrapper[4796]: I1125 14:40:23.646564 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qn59p"] Nov 25 14:40:23 crc kubenswrapper[4796]: I1125 14:40:23.649696 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qn59p" Nov 25 14:40:23 crc kubenswrapper[4796]: I1125 14:40:23.664969 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qn59p"] Nov 25 14:40:23 crc kubenswrapper[4796]: I1125 14:40:23.766792 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9xfr\" (UniqueName: \"kubernetes.io/projected/6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5-kube-api-access-z9xfr\") pod \"community-operators-qn59p\" (UID: \"6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5\") " pod="openshift-marketplace/community-operators-qn59p" Nov 25 14:40:23 crc kubenswrapper[4796]: I1125 14:40:23.766848 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5-utilities\") pod \"community-operators-qn59p\" (UID: \"6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5\") " pod="openshift-marketplace/community-operators-qn59p" Nov 25 14:40:23 crc kubenswrapper[4796]: I1125 14:40:23.766890 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5-catalog-content\") pod \"community-operators-qn59p\" (UID: \"6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5\") " pod="openshift-marketplace/community-operators-qn59p" Nov 25 14:40:23 crc kubenswrapper[4796]: I1125 14:40:23.868358 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9xfr\" (UniqueName: \"kubernetes.io/projected/6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5-kube-api-access-z9xfr\") pod \"community-operators-qn59p\" (UID: \"6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5\") " pod="openshift-marketplace/community-operators-qn59p" Nov 25 14:40:23 crc kubenswrapper[4796]: I1125 14:40:23.868423 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5-utilities\") pod \"community-operators-qn59p\" (UID: \"6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5\") " pod="openshift-marketplace/community-operators-qn59p" Nov 25 14:40:23 crc kubenswrapper[4796]: I1125 14:40:23.868459 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5-catalog-content\") pod \"community-operators-qn59p\" (UID: \"6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5\") " pod="openshift-marketplace/community-operators-qn59p" Nov 25 14:40:23 crc kubenswrapper[4796]: I1125 14:40:23.869045 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5-catalog-content\") pod \"community-operators-qn59p\" (UID: \"6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5\") " pod="openshift-marketplace/community-operators-qn59p" Nov 25 14:40:23 crc kubenswrapper[4796]: I1125 14:40:23.869244 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5-utilities\") pod \"community-operators-qn59p\" (UID: \"6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5\") " pod="openshift-marketplace/community-operators-qn59p" Nov 25 14:40:23 crc kubenswrapper[4796]: I1125 14:40:23.889537 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9xfr\" (UniqueName: \"kubernetes.io/projected/6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5-kube-api-access-z9xfr\") pod \"community-operators-qn59p\" (UID: \"6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5\") " pod="openshift-marketplace/community-operators-qn59p" Nov 25 14:40:23 crc kubenswrapper[4796]: I1125 14:40:23.981000 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qn59p" Nov 25 14:40:24 crc kubenswrapper[4796]: I1125 14:40:24.315327 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qn59p"] Nov 25 14:40:24 crc kubenswrapper[4796]: I1125 14:40:24.377147 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qn59p" event={"ID":"6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5","Type":"ContainerStarted","Data":"384064cd66f72241fe4f9208fc7e8681be2729be6b040252ea27478576691d86"} Nov 25 14:40:24 crc kubenswrapper[4796]: E1125 14:40:24.727826 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6eb4aa15_4a38_4f7c_8f9c_4218364f1dd5.slice/crio-conmon-09b8b2e0cd1e7398fcf3fc944ad63a12999b8e5264a77502e3ea1ef839eb09dc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6eb4aa15_4a38_4f7c_8f9c_4218364f1dd5.slice/crio-09b8b2e0cd1e7398fcf3fc944ad63a12999b8e5264a77502e3ea1ef839eb09dc.scope\": RecentStats: unable to find data in memory cache]" Nov 25 14:40:25 crc 
kubenswrapper[4796]: I1125 14:40:25.391058 4796 generic.go:334] "Generic (PLEG): container finished" podID="6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5" containerID="09b8b2e0cd1e7398fcf3fc944ad63a12999b8e5264a77502e3ea1ef839eb09dc" exitCode=0 Nov 25 14:40:25 crc kubenswrapper[4796]: I1125 14:40:25.391130 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qn59p" event={"ID":"6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5","Type":"ContainerDied","Data":"09b8b2e0cd1e7398fcf3fc944ad63a12999b8e5264a77502e3ea1ef839eb09dc"} Nov 25 14:40:26 crc kubenswrapper[4796]: I1125 14:40:26.398671 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdmqv" event={"ID":"2066409e-fcc1-4c0d-a581-97541778b188","Type":"ContainerStarted","Data":"1018a8f97ac54527bec0d728333118ff14932a97b459d624c1942eac20aed088"} Nov 25 14:40:26 crc kubenswrapper[4796]: I1125 14:40:26.422096 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fdmqv" podStartSLOduration=5.681415396 podStartE2EDuration="11.422077053s" podCreationTimestamp="2025-11-25 14:40:15 +0000 UTC" firstStartedPulling="2025-11-25 14:40:19.326446924 +0000 UTC m=+947.669556388" lastFinishedPulling="2025-11-25 14:40:25.067108611 +0000 UTC m=+953.410218045" observedRunningTime="2025-11-25 14:40:26.421045412 +0000 UTC m=+954.764154856" watchObservedRunningTime="2025-11-25 14:40:26.422077053 +0000 UTC m=+954.765186487" Nov 25 14:40:30 crc kubenswrapper[4796]: I1125 14:40:30.442845 4796 generic.go:334] "Generic (PLEG): container finished" podID="6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5" containerID="7566ecc45a2ec929ac0ec83be59fc6f78d014cf4970861a20d580923b5e2b889" exitCode=0 Nov 25 14:40:30 crc kubenswrapper[4796]: I1125 14:40:30.442953 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qn59p" 
event={"ID":"6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5","Type":"ContainerDied","Data":"7566ecc45a2ec929ac0ec83be59fc6f78d014cf4970861a20d580923b5e2b889"} Nov 25 14:40:32 crc kubenswrapper[4796]: I1125 14:40:32.445134 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-5fd4b8b4b5-s2rpd" Nov 25 14:40:32 crc kubenswrapper[4796]: I1125 14:40:32.458732 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qn59p" event={"ID":"6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5","Type":"ContainerStarted","Data":"0189e9b0059e34fbe3c1c4ed3f3d6644b2fc76c0ffa7bd3e28fcc479c029d80d"} Nov 25 14:40:32 crc kubenswrapper[4796]: I1125 14:40:32.494699 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qn59p" podStartSLOduration=3.26952122 podStartE2EDuration="9.494684257s" podCreationTimestamp="2025-11-25 14:40:23 +0000 UTC" firstStartedPulling="2025-11-25 14:40:25.393764884 +0000 UTC m=+953.736874338" lastFinishedPulling="2025-11-25 14:40:31.618927921 +0000 UTC m=+959.962037375" observedRunningTime="2025-11-25 14:40:32.493192821 +0000 UTC m=+960.836302255" watchObservedRunningTime="2025-11-25 14:40:32.494684257 +0000 UTC m=+960.837793681" Nov 25 14:40:33 crc kubenswrapper[4796]: I1125 14:40:33.981618 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qn59p" Nov 25 14:40:33 crc kubenswrapper[4796]: I1125 14:40:33.981684 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qn59p" Nov 25 14:40:34 crc kubenswrapper[4796]: I1125 14:40:34.026587 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qn59p" Nov 25 14:40:35 crc kubenswrapper[4796]: I1125 14:40:35.770294 4796 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-fdmqv" Nov 25 14:40:35 crc kubenswrapper[4796]: I1125 14:40:35.771281 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fdmqv" Nov 25 14:40:35 crc kubenswrapper[4796]: I1125 14:40:35.813774 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fdmqv" Nov 25 14:40:36 crc kubenswrapper[4796]: I1125 14:40:36.547451 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fdmqv" Nov 25 14:40:39 crc kubenswrapper[4796]: I1125 14:40:39.232071 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fdmqv"] Nov 25 14:40:39 crc kubenswrapper[4796]: I1125 14:40:39.232632 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fdmqv" podUID="2066409e-fcc1-4c0d-a581-97541778b188" containerName="registry-server" containerID="cri-o://1018a8f97ac54527bec0d728333118ff14932a97b459d624c1942eac20aed088" gracePeriod=2 Nov 25 14:40:40 crc kubenswrapper[4796]: I1125 14:40:40.222545 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fdmqv" Nov 25 14:40:40 crc kubenswrapper[4796]: I1125 14:40:40.401604 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2066409e-fcc1-4c0d-a581-97541778b188-catalog-content\") pod \"2066409e-fcc1-4c0d-a581-97541778b188\" (UID: \"2066409e-fcc1-4c0d-a581-97541778b188\") " Nov 25 14:40:40 crc kubenswrapper[4796]: I1125 14:40:40.401647 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p9k2\" (UniqueName: \"kubernetes.io/projected/2066409e-fcc1-4c0d-a581-97541778b188-kube-api-access-2p9k2\") pod \"2066409e-fcc1-4c0d-a581-97541778b188\" (UID: \"2066409e-fcc1-4c0d-a581-97541778b188\") " Nov 25 14:40:40 crc kubenswrapper[4796]: I1125 14:40:40.401732 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2066409e-fcc1-4c0d-a581-97541778b188-utilities\") pod \"2066409e-fcc1-4c0d-a581-97541778b188\" (UID: \"2066409e-fcc1-4c0d-a581-97541778b188\") " Nov 25 14:40:40 crc kubenswrapper[4796]: I1125 14:40:40.402722 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2066409e-fcc1-4c0d-a581-97541778b188-utilities" (OuterVolumeSpecName: "utilities") pod "2066409e-fcc1-4c0d-a581-97541778b188" (UID: "2066409e-fcc1-4c0d-a581-97541778b188"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:40:40 crc kubenswrapper[4796]: I1125 14:40:40.414613 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2066409e-fcc1-4c0d-a581-97541778b188-kube-api-access-2p9k2" (OuterVolumeSpecName: "kube-api-access-2p9k2") pod "2066409e-fcc1-4c0d-a581-97541778b188" (UID: "2066409e-fcc1-4c0d-a581-97541778b188"). InnerVolumeSpecName "kube-api-access-2p9k2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:40:40 crc kubenswrapper[4796]: I1125 14:40:40.466044 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2066409e-fcc1-4c0d-a581-97541778b188-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2066409e-fcc1-4c0d-a581-97541778b188" (UID: "2066409e-fcc1-4c0d-a581-97541778b188"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:40:40 crc kubenswrapper[4796]: I1125 14:40:40.503742 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2066409e-fcc1-4c0d-a581-97541778b188-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:40:40 crc kubenswrapper[4796]: I1125 14:40:40.504028 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2066409e-fcc1-4c0d-a581-97541778b188-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:40:40 crc kubenswrapper[4796]: I1125 14:40:40.504108 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p9k2\" (UniqueName: \"kubernetes.io/projected/2066409e-fcc1-4c0d-a581-97541778b188-kube-api-access-2p9k2\") on node \"crc\" DevicePath \"\"" Nov 25 14:40:40 crc kubenswrapper[4796]: I1125 14:40:40.510623 4796 generic.go:334] "Generic (PLEG): container finished" podID="2066409e-fcc1-4c0d-a581-97541778b188" containerID="1018a8f97ac54527bec0d728333118ff14932a97b459d624c1942eac20aed088" exitCode=0 Nov 25 14:40:40 crc kubenswrapper[4796]: I1125 14:40:40.510657 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdmqv" event={"ID":"2066409e-fcc1-4c0d-a581-97541778b188","Type":"ContainerDied","Data":"1018a8f97ac54527bec0d728333118ff14932a97b459d624c1942eac20aed088"} Nov 25 14:40:40 crc kubenswrapper[4796]: I1125 14:40:40.510695 4796 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fdmqv" Nov 25 14:40:40 crc kubenswrapper[4796]: I1125 14:40:40.510713 4796 scope.go:117] "RemoveContainer" containerID="1018a8f97ac54527bec0d728333118ff14932a97b459d624c1942eac20aed088" Nov 25 14:40:40 crc kubenswrapper[4796]: I1125 14:40:40.510702 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdmqv" event={"ID":"2066409e-fcc1-4c0d-a581-97541778b188","Type":"ContainerDied","Data":"938be43108655ade0542e26a0cc0bd1d26ada5c14c7857bdfe555c580f8c368e"} Nov 25 14:40:40 crc kubenswrapper[4796]: I1125 14:40:40.529925 4796 scope.go:117] "RemoveContainer" containerID="7133a055f67602f7e7e2e43d6204958df96d61cfe70b0c1ac21cb63f2745cb22" Nov 25 14:40:40 crc kubenswrapper[4796]: I1125 14:40:40.541537 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fdmqv"] Nov 25 14:40:40 crc kubenswrapper[4796]: I1125 14:40:40.545068 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fdmqv"] Nov 25 14:40:40 crc kubenswrapper[4796]: I1125 14:40:40.555010 4796 scope.go:117] "RemoveContainer" containerID="21a97bcbf88a355ecdf3ef6facb40a1140c1062aa37c10a773b5813c87e5400d" Nov 25 14:40:40 crc kubenswrapper[4796]: I1125 14:40:40.568031 4796 scope.go:117] "RemoveContainer" containerID="1018a8f97ac54527bec0d728333118ff14932a97b459d624c1942eac20aed088" Nov 25 14:40:40 crc kubenswrapper[4796]: E1125 14:40:40.568499 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1018a8f97ac54527bec0d728333118ff14932a97b459d624c1942eac20aed088\": container with ID starting with 1018a8f97ac54527bec0d728333118ff14932a97b459d624c1942eac20aed088 not found: ID does not exist" containerID="1018a8f97ac54527bec0d728333118ff14932a97b459d624c1942eac20aed088" Nov 25 14:40:40 crc kubenswrapper[4796]: I1125 14:40:40.568528 
4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1018a8f97ac54527bec0d728333118ff14932a97b459d624c1942eac20aed088"} err="failed to get container status \"1018a8f97ac54527bec0d728333118ff14932a97b459d624c1942eac20aed088\": rpc error: code = NotFound desc = could not find container \"1018a8f97ac54527bec0d728333118ff14932a97b459d624c1942eac20aed088\": container with ID starting with 1018a8f97ac54527bec0d728333118ff14932a97b459d624c1942eac20aed088 not found: ID does not exist" Nov 25 14:40:40 crc kubenswrapper[4796]: I1125 14:40:40.568549 4796 scope.go:117] "RemoveContainer" containerID="7133a055f67602f7e7e2e43d6204958df96d61cfe70b0c1ac21cb63f2745cb22" Nov 25 14:40:40 crc kubenswrapper[4796]: E1125 14:40:40.568855 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7133a055f67602f7e7e2e43d6204958df96d61cfe70b0c1ac21cb63f2745cb22\": container with ID starting with 7133a055f67602f7e7e2e43d6204958df96d61cfe70b0c1ac21cb63f2745cb22 not found: ID does not exist" containerID="7133a055f67602f7e7e2e43d6204958df96d61cfe70b0c1ac21cb63f2745cb22" Nov 25 14:40:40 crc kubenswrapper[4796]: I1125 14:40:40.568907 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7133a055f67602f7e7e2e43d6204958df96d61cfe70b0c1ac21cb63f2745cb22"} err="failed to get container status \"7133a055f67602f7e7e2e43d6204958df96d61cfe70b0c1ac21cb63f2745cb22\": rpc error: code = NotFound desc = could not find container \"7133a055f67602f7e7e2e43d6204958df96d61cfe70b0c1ac21cb63f2745cb22\": container with ID starting with 7133a055f67602f7e7e2e43d6204958df96d61cfe70b0c1ac21cb63f2745cb22 not found: ID does not exist" Nov 25 14:40:40 crc kubenswrapper[4796]: I1125 14:40:40.568925 4796 scope.go:117] "RemoveContainer" containerID="21a97bcbf88a355ecdf3ef6facb40a1140c1062aa37c10a773b5813c87e5400d" Nov 25 14:40:40 crc kubenswrapper[4796]: E1125 
14:40:40.569355 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21a97bcbf88a355ecdf3ef6facb40a1140c1062aa37c10a773b5813c87e5400d\": container with ID starting with 21a97bcbf88a355ecdf3ef6facb40a1140c1062aa37c10a773b5813c87e5400d not found: ID does not exist" containerID="21a97bcbf88a355ecdf3ef6facb40a1140c1062aa37c10a773b5813c87e5400d"
Nov 25 14:40:40 crc kubenswrapper[4796]: I1125 14:40:40.569473 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21a97bcbf88a355ecdf3ef6facb40a1140c1062aa37c10a773b5813c87e5400d"} err="failed to get container status \"21a97bcbf88a355ecdf3ef6facb40a1140c1062aa37c10a773b5813c87e5400d\": rpc error: code = NotFound desc = could not find container \"21a97bcbf88a355ecdf3ef6facb40a1140c1062aa37c10a773b5813c87e5400d\": container with ID starting with 21a97bcbf88a355ecdf3ef6facb40a1140c1062aa37c10a773b5813c87e5400d not found: ID does not exist"
Nov 25 14:40:42 crc kubenswrapper[4796]: I1125 14:40:42.418434 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2066409e-fcc1-4c0d-a581-97541778b188" path="/var/lib/kubelet/pods/2066409e-fcc1-4c0d-a581-97541778b188/volumes"
Nov 25 14:40:44 crc kubenswrapper[4796]: I1125 14:40:44.027474 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qn59p"
Nov 25 14:40:46 crc kubenswrapper[4796]: I1125 14:40:46.434430 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qn59p"]
Nov 25 14:40:46 crc kubenswrapper[4796]: I1125 14:40:46.434944 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qn59p" podUID="6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5" containerName="registry-server" containerID="cri-o://0189e9b0059e34fbe3c1c4ed3f3d6644b2fc76c0ffa7bd3e28fcc479c029d80d" gracePeriod=2
Nov 25 14:40:47 crc kubenswrapper[4796]: I1125 14:40:47.420997 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qn59p"
Nov 25 14:40:47 crc kubenswrapper[4796]: I1125 14:40:47.553756 4796 generic.go:334] "Generic (PLEG): container finished" podID="6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5" containerID="0189e9b0059e34fbe3c1c4ed3f3d6644b2fc76c0ffa7bd3e28fcc479c029d80d" exitCode=0
Nov 25 14:40:47 crc kubenswrapper[4796]: I1125 14:40:47.553807 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qn59p" event={"ID":"6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5","Type":"ContainerDied","Data":"0189e9b0059e34fbe3c1c4ed3f3d6644b2fc76c0ffa7bd3e28fcc479c029d80d"}
Nov 25 14:40:47 crc kubenswrapper[4796]: I1125 14:40:47.553838 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qn59p" event={"ID":"6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5","Type":"ContainerDied","Data":"384064cd66f72241fe4f9208fc7e8681be2729be6b040252ea27478576691d86"}
Nov 25 14:40:47 crc kubenswrapper[4796]: I1125 14:40:47.553859 4796 scope.go:117] "RemoveContainer" containerID="0189e9b0059e34fbe3c1c4ed3f3d6644b2fc76c0ffa7bd3e28fcc479c029d80d"
Nov 25 14:40:47 crc kubenswrapper[4796]: I1125 14:40:47.554011 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qn59p"
Nov 25 14:40:47 crc kubenswrapper[4796]: I1125 14:40:47.571463 4796 scope.go:117] "RemoveContainer" containerID="7566ecc45a2ec929ac0ec83be59fc6f78d014cf4970861a20d580923b5e2b889"
Nov 25 14:40:47 crc kubenswrapper[4796]: I1125 14:40:47.591622 4796 scope.go:117] "RemoveContainer" containerID="09b8b2e0cd1e7398fcf3fc944ad63a12999b8e5264a77502e3ea1ef839eb09dc"
Nov 25 14:40:47 crc kubenswrapper[4796]: I1125 14:40:47.607957 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9xfr\" (UniqueName: \"kubernetes.io/projected/6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5-kube-api-access-z9xfr\") pod \"6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5\" (UID: \"6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5\") "
Nov 25 14:40:47 crc kubenswrapper[4796]: I1125 14:40:47.608033 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5-utilities\") pod \"6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5\" (UID: \"6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5\") "
Nov 25 14:40:47 crc kubenswrapper[4796]: I1125 14:40:47.608063 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5-catalog-content\") pod \"6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5\" (UID: \"6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5\") "
Nov 25 14:40:47 crc kubenswrapper[4796]: I1125 14:40:47.610102 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5-utilities" (OuterVolumeSpecName: "utilities") pod "6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5" (UID: "6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 14:40:47 crc kubenswrapper[4796]: I1125 14:40:47.615607 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5-kube-api-access-z9xfr" (OuterVolumeSpecName: "kube-api-access-z9xfr") pod "6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5" (UID: "6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5"). InnerVolumeSpecName "kube-api-access-z9xfr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 14:40:47 crc kubenswrapper[4796]: I1125 14:40:47.615883 4796 scope.go:117] "RemoveContainer" containerID="0189e9b0059e34fbe3c1c4ed3f3d6644b2fc76c0ffa7bd3e28fcc479c029d80d"
Nov 25 14:40:47 crc kubenswrapper[4796]: E1125 14:40:47.616307 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0189e9b0059e34fbe3c1c4ed3f3d6644b2fc76c0ffa7bd3e28fcc479c029d80d\": container with ID starting with 0189e9b0059e34fbe3c1c4ed3f3d6644b2fc76c0ffa7bd3e28fcc479c029d80d not found: ID does not exist" containerID="0189e9b0059e34fbe3c1c4ed3f3d6644b2fc76c0ffa7bd3e28fcc479c029d80d"
Nov 25 14:40:47 crc kubenswrapper[4796]: I1125 14:40:47.616333 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0189e9b0059e34fbe3c1c4ed3f3d6644b2fc76c0ffa7bd3e28fcc479c029d80d"} err="failed to get container status \"0189e9b0059e34fbe3c1c4ed3f3d6644b2fc76c0ffa7bd3e28fcc479c029d80d\": rpc error: code = NotFound desc = could not find container \"0189e9b0059e34fbe3c1c4ed3f3d6644b2fc76c0ffa7bd3e28fcc479c029d80d\": container with ID starting with 0189e9b0059e34fbe3c1c4ed3f3d6644b2fc76c0ffa7bd3e28fcc479c029d80d not found: ID does not exist"
Nov 25 14:40:47 crc kubenswrapper[4796]: I1125 14:40:47.616352 4796 scope.go:117] "RemoveContainer" containerID="7566ecc45a2ec929ac0ec83be59fc6f78d014cf4970861a20d580923b5e2b889"
Nov 25 14:40:47 crc kubenswrapper[4796]: E1125 14:40:47.616810 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7566ecc45a2ec929ac0ec83be59fc6f78d014cf4970861a20d580923b5e2b889\": container with ID starting with 7566ecc45a2ec929ac0ec83be59fc6f78d014cf4970861a20d580923b5e2b889 not found: ID does not exist" containerID="7566ecc45a2ec929ac0ec83be59fc6f78d014cf4970861a20d580923b5e2b889"
Nov 25 14:40:47 crc kubenswrapper[4796]: I1125 14:40:47.616833 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7566ecc45a2ec929ac0ec83be59fc6f78d014cf4970861a20d580923b5e2b889"} err="failed to get container status \"7566ecc45a2ec929ac0ec83be59fc6f78d014cf4970861a20d580923b5e2b889\": rpc error: code = NotFound desc = could not find container \"7566ecc45a2ec929ac0ec83be59fc6f78d014cf4970861a20d580923b5e2b889\": container with ID starting with 7566ecc45a2ec929ac0ec83be59fc6f78d014cf4970861a20d580923b5e2b889 not found: ID does not exist"
Nov 25 14:40:47 crc kubenswrapper[4796]: I1125 14:40:47.616846 4796 scope.go:117] "RemoveContainer" containerID="09b8b2e0cd1e7398fcf3fc944ad63a12999b8e5264a77502e3ea1ef839eb09dc"
Nov 25 14:40:47 crc kubenswrapper[4796]: E1125 14:40:47.617062 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09b8b2e0cd1e7398fcf3fc944ad63a12999b8e5264a77502e3ea1ef839eb09dc\": container with ID starting with 09b8b2e0cd1e7398fcf3fc944ad63a12999b8e5264a77502e3ea1ef839eb09dc not found: ID does not exist" containerID="09b8b2e0cd1e7398fcf3fc944ad63a12999b8e5264a77502e3ea1ef839eb09dc"
Nov 25 14:40:47 crc kubenswrapper[4796]: I1125 14:40:47.617082 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09b8b2e0cd1e7398fcf3fc944ad63a12999b8e5264a77502e3ea1ef839eb09dc"} err="failed to get container status \"09b8b2e0cd1e7398fcf3fc944ad63a12999b8e5264a77502e3ea1ef839eb09dc\": rpc error: code = NotFound desc = could not find container \"09b8b2e0cd1e7398fcf3fc944ad63a12999b8e5264a77502e3ea1ef839eb09dc\": container with ID starting with 09b8b2e0cd1e7398fcf3fc944ad63a12999b8e5264a77502e3ea1ef839eb09dc not found: ID does not exist"
Nov 25 14:40:47 crc kubenswrapper[4796]: I1125 14:40:47.670528 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5" (UID: "6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 14:40:47 crc kubenswrapper[4796]: I1125 14:40:47.710019 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 14:40:47 crc kubenswrapper[4796]: I1125 14:40:47.710042 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9xfr\" (UniqueName: \"kubernetes.io/projected/6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5-kube-api-access-z9xfr\") on node \"crc\" DevicePath \"\""
Nov 25 14:40:47 crc kubenswrapper[4796]: I1125 14:40:47.710052 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 14:40:47 crc kubenswrapper[4796]: I1125 14:40:47.881267 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qn59p"]
Nov 25 14:40:47 crc kubenswrapper[4796]: I1125 14:40:47.887335 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qn59p"]
Nov 25 14:40:48 crc kubenswrapper[4796]: I1125 14:40:48.416661 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5" path="/var/lib/kubelet/pods/6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5/volumes"
Nov 25 14:40:49 crc kubenswrapper[4796]: I1125 14:40:49.514033 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 14:40:49 crc kubenswrapper[4796]: I1125 14:40:49.514105 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 14:40:49 crc kubenswrapper[4796]: I1125 14:40:49.514158 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl"
Nov 25 14:40:49 crc kubenswrapper[4796]: I1125 14:40:49.514858 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6ee44c58110c9589c1b970cfd7a594ab20931e9987e3c25566f0b8fb802d3fc7"} pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 14:40:49 crc kubenswrapper[4796]: I1125 14:40:49.514932 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" containerID="cri-o://6ee44c58110c9589c1b970cfd7a594ab20931e9987e3c25566f0b8fb802d3fc7" gracePeriod=600
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.025981 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-7q45f"]
Nov 25 14:40:50 crc kubenswrapper[4796]: E1125 14:40:50.026423 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5" containerName="registry-server"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.026434 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5" containerName="registry-server"
Nov 25 14:40:50 crc kubenswrapper[4796]: E1125 14:40:50.026449 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2066409e-fcc1-4c0d-a581-97541778b188" containerName="registry-server"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.026454 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2066409e-fcc1-4c0d-a581-97541778b188" containerName="registry-server"
Nov 25 14:40:50 crc kubenswrapper[4796]: E1125 14:40:50.026465 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5" containerName="extract-utilities"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.026472 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5" containerName="extract-utilities"
Nov 25 14:40:50 crc kubenswrapper[4796]: E1125 14:40:50.026483 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2066409e-fcc1-4c0d-a581-97541778b188" containerName="extract-utilities"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.026488 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2066409e-fcc1-4c0d-a581-97541778b188" containerName="extract-utilities"
Nov 25 14:40:50 crc kubenswrapper[4796]: E1125 14:40:50.026496 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5" containerName="extract-content"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.026502 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5" containerName="extract-content"
Nov 25 14:40:50 crc kubenswrapper[4796]: E1125 14:40:50.026513 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2066409e-fcc1-4c0d-a581-97541778b188" containerName="extract-content"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.026519 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2066409e-fcc1-4c0d-a581-97541778b188" containerName="extract-content"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.026647 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="2066409e-fcc1-4c0d-a581-97541778b188" containerName="registry-server"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.026659 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eb4aa15-4a38-4f7c-8f9c-4218364f1dd5" containerName="registry-server"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.027217 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-7q45f"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.029826 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-k789f"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.042421 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-4w4wl"]
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.043690 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4w4wl"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.047124 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-kn75j"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.054278 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-7q45f"]
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.058948 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-4w4wl"]
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.078376 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-47sfh"]
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.079306 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-47sfh"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.082643 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-n26g8"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.106331 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-47sfh"]
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.113183 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-nfdb6"]
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.114197 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nfdb6"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.115967 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-x8lj8"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.126829 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-nfdb6"]
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.141350 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-w8gkv"]
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.144232 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsjr6\" (UniqueName: \"kubernetes.io/projected/efaf4581-131a-496d-ba2f-75db34748600-kube-api-access-xsjr6\") pod \"designate-operator-controller-manager-7d695c9b56-47sfh\" (UID: \"efaf4581-131a-496d-ba2f-75db34748600\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-47sfh"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.144301 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpvbq\" (UniqueName: \"kubernetes.io/projected/3472c0d0-0763-4342-83cb-5b7a44e5b2e0-kube-api-access-rpvbq\") pod \"barbican-operator-controller-manager-86dc4d89c8-7q45f\" (UID: \"3472c0d0-0763-4342-83cb-5b7a44e5b2e0\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-7q45f"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.144344 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb6vt\" (UniqueName: \"kubernetes.io/projected/ed513bf3-e75f-40b3-814e-508f4d9e9ce6-kube-api-access-wb6vt\") pod \"glance-operator-controller-manager-68b95954c9-nfdb6\" (UID: \"ed513bf3-e75f-40b3-814e-508f4d9e9ce6\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nfdb6"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.144367 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvmk7\" (UniqueName: \"kubernetes.io/projected/4f74b624-2ef6-4289-8cb1-8d6babc260f5-kube-api-access-dvmk7\") pod \"heat-operator-controller-manager-774b86978c-w8gkv\" (UID: \"4f74b624-2ef6-4289-8cb1-8d6babc260f5\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-w8gkv"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.144389 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klktj\" (UniqueName: \"kubernetes.io/projected/3a3976ed-e631-4fda-9b60-1e4b62992c70-kube-api-access-klktj\") pod \"cinder-operator-controller-manager-79856dc55c-4w4wl\" (UID: \"3a3976ed-e631-4fda-9b60-1e4b62992c70\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4w4wl"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.147136 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-7ljk7"]
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.147194 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-w8gkv"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.148637 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-7ljk7"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.153349 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-vm82p"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.153640 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-knzqh"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.205308 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-w8gkv"]
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.229327 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-tbmwj"]
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.231488 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-tbmwj"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.253523 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.253824 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-l2nl4"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.257477 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfld7\" (UniqueName: \"kubernetes.io/projected/5e82891b-b135-4f6a-8341-7ae6efb7d7ab-kube-api-access-nfld7\") pod \"horizon-operator-controller-manager-68c9694994-7ljk7\" (UID: \"5e82891b-b135-4f6a-8341-7ae6efb7d7ab\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-7ljk7"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.257531 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsjr6\" (UniqueName: \"kubernetes.io/projected/efaf4581-131a-496d-ba2f-75db34748600-kube-api-access-xsjr6\") pod \"designate-operator-controller-manager-7d695c9b56-47sfh\" (UID: \"efaf4581-131a-496d-ba2f-75db34748600\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-47sfh"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.257562 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9ec5036f-9a2f-4a3f-ad57-191ac97cf6ff-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-tbmwj\" (UID: \"9ec5036f-9a2f-4a3f-ad57-191ac97cf6ff\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-tbmwj"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.258076 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpvbq\" (UniqueName: \"kubernetes.io/projected/3472c0d0-0763-4342-83cb-5b7a44e5b2e0-kube-api-access-rpvbq\") pod \"barbican-operator-controller-manager-86dc4d89c8-7q45f\" (UID: \"3472c0d0-0763-4342-83cb-5b7a44e5b2e0\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-7q45f"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.258116 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnrjc\" (UniqueName: \"kubernetes.io/projected/9ec5036f-9a2f-4a3f-ad57-191ac97cf6ff-kube-api-access-lnrjc\") pod \"infra-operator-controller-manager-d5cc86f4b-tbmwj\" (UID: \"9ec5036f-9a2f-4a3f-ad57-191ac97cf6ff\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-tbmwj"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.258136 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb6vt\" (UniqueName: \"kubernetes.io/projected/ed513bf3-e75f-40b3-814e-508f4d9e9ce6-kube-api-access-wb6vt\") pod \"glance-operator-controller-manager-68b95954c9-nfdb6\" (UID: \"ed513bf3-e75f-40b3-814e-508f4d9e9ce6\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nfdb6"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.258155 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvmk7\" (UniqueName: \"kubernetes.io/projected/4f74b624-2ef6-4289-8cb1-8d6babc260f5-kube-api-access-dvmk7\") pod \"heat-operator-controller-manager-774b86978c-w8gkv\" (UID: \"4f74b624-2ef6-4289-8cb1-8d6babc260f5\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-w8gkv"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.258171 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klktj\" (UniqueName: \"kubernetes.io/projected/3a3976ed-e631-4fda-9b60-1e4b62992c70-kube-api-access-klktj\") pod \"cinder-operator-controller-manager-79856dc55c-4w4wl\" (UID: \"3a3976ed-e631-4fda-9b60-1e4b62992c70\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4w4wl"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.282798 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7xmhd"]
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.284091 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7xmhd"]
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.284212 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7xmhd"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.294957 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-q4s4s"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.320661 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-7ljk7"]
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.321420 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb6vt\" (UniqueName: \"kubernetes.io/projected/ed513bf3-e75f-40b3-814e-508f4d9e9ce6-kube-api-access-wb6vt\") pod \"glance-operator-controller-manager-68b95954c9-nfdb6\" (UID: \"ed513bf3-e75f-40b3-814e-508f4d9e9ce6\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nfdb6"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.321444 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsjr6\" (UniqueName: \"kubernetes.io/projected/efaf4581-131a-496d-ba2f-75db34748600-kube-api-access-xsjr6\") pod \"designate-operator-controller-manager-7d695c9b56-47sfh\" (UID: \"efaf4581-131a-496d-ba2f-75db34748600\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-47sfh"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.322612 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvmk7\" (UniqueName: \"kubernetes.io/projected/4f74b624-2ef6-4289-8cb1-8d6babc260f5-kube-api-access-dvmk7\") pod \"heat-operator-controller-manager-774b86978c-w8gkv\" (UID: \"4f74b624-2ef6-4289-8cb1-8d6babc260f5\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-w8gkv"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.326686 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klktj\" (UniqueName: \"kubernetes.io/projected/3a3976ed-e631-4fda-9b60-1e4b62992c70-kube-api-access-klktj\") pod \"cinder-operator-controller-manager-79856dc55c-4w4wl\" (UID: \"3a3976ed-e631-4fda-9b60-1e4b62992c70\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4w4wl"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.328641 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-tbmwj"]
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.332226 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpvbq\" (UniqueName: \"kubernetes.io/projected/3472c0d0-0763-4342-83cb-5b7a44e5b2e0-kube-api-access-rpvbq\") pod \"barbican-operator-controller-manager-86dc4d89c8-7q45f\" (UID: \"3472c0d0-0763-4342-83cb-5b7a44e5b2e0\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-7q45f"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.347803 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-wqkh5"]
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.347962 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-7q45f"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.349174 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-wqkh5"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.350999 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-9wz9r"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.356340 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-v9j5d"]
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.357618 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-v9j5d"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.360170 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-7xwn5"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.360273 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4w4wl"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.362325 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfld7\" (UniqueName: \"kubernetes.io/projected/5e82891b-b135-4f6a-8341-7ae6efb7d7ab-kube-api-access-nfld7\") pod \"horizon-operator-controller-manager-68c9694994-7ljk7\" (UID: \"5e82891b-b135-4f6a-8341-7ae6efb7d7ab\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-7ljk7"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.362375 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9ec5036f-9a2f-4a3f-ad57-191ac97cf6ff-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-tbmwj\" (UID: \"9ec5036f-9a2f-4a3f-ad57-191ac97cf6ff\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-tbmwj"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.362410 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlgbs\" (UniqueName: \"kubernetes.io/projected/f1937d85-62aa-4880-81ca-91d58ab2fba2-kube-api-access-dlgbs\") pod \"keystone-operator-controller-manager-748dc6576f-wqkh5\" (UID: \"f1937d85-62aa-4880-81ca-91d58ab2fba2\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-wqkh5"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.362434 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnrjc\" (UniqueName: \"kubernetes.io/projected/9ec5036f-9a2f-4a3f-ad57-191ac97cf6ff-kube-api-access-lnrjc\") pod \"infra-operator-controller-manager-d5cc86f4b-tbmwj\" (UID: \"9ec5036f-9a2f-4a3f-ad57-191ac97cf6ff\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-tbmwj"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.362483 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rf6k\" (UniqueName: \"kubernetes.io/projected/e5bf5c53-1a09-4635-9ebb-e2a6fb722e06-kube-api-access-7rf6k\") pod \"manila-operator-controller-manager-58bb8d67cc-v9j5d\" (UID: \"e5bf5c53-1a09-4635-9ebb-e2a6fb722e06\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-v9j5d"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.378448 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9ec5036f-9a2f-4a3f-ad57-191ac97cf6ff-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-tbmwj\" (UID: \"9ec5036f-9a2f-4a3f-ad57-191ac97cf6ff\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-tbmwj"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.384366 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-h96k8"]
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.387264 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-h96k8"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.390912 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-5bjcb"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.397445 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfld7\" (UniqueName: \"kubernetes.io/projected/5e82891b-b135-4f6a-8341-7ae6efb7d7ab-kube-api-access-nfld7\") pod \"horizon-operator-controller-manager-68c9694994-7ljk7\" (UID: \"5e82891b-b135-4f6a-8341-7ae6efb7d7ab\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-7ljk7"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.400746 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnrjc\" (UniqueName: \"kubernetes.io/projected/9ec5036f-9a2f-4a3f-ad57-191ac97cf6ff-kube-api-access-lnrjc\") pod \"infra-operator-controller-manager-d5cc86f4b-tbmwj\" (UID: \"9ec5036f-9a2f-4a3f-ad57-191ac97cf6ff\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-tbmwj"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.402134 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-tbmwj"
Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.437915 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-47sfh" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.454752 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-wqkh5"] Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.454795 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-mfg66"] Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.455911 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-v9j5d"] Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.455934 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-h96k8"] Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.456017 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-mfg66" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.458469 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-xwlk4" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.459182 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nfdb6" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.461480 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-7c6bw"] Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.463361 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts28c\" (UniqueName: \"kubernetes.io/projected/c20eb9b8-4c87-4145-b550-e887fd680797-kube-api-access-ts28c\") pod \"ironic-operator-controller-manager-5bfcdc958c-7xmhd\" (UID: \"c20eb9b8-4c87-4145-b550-e887fd680797\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7xmhd" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.463398 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt9zw\" (UniqueName: \"kubernetes.io/projected/7cda050e-831a-42f8-93f7-c33e10a8b119-kube-api-access-dt9zw\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-h96k8\" (UID: \"7cda050e-831a-42f8-93f7-c33e10a8b119\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-h96k8" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.463379 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-7c6bw" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.463422 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwvdq\" (UniqueName: \"kubernetes.io/projected/b652a700-3131-4706-a300-c3f2c54519a3-kube-api-access-pwvdq\") pod \"neutron-operator-controller-manager-7c57c8bbc4-mfg66\" (UID: \"b652a700-3131-4706-a300-c3f2c54519a3\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-mfg66" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.463471 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlgbs\" (UniqueName: \"kubernetes.io/projected/f1937d85-62aa-4880-81ca-91d58ab2fba2-kube-api-access-dlgbs\") pod \"keystone-operator-controller-manager-748dc6576f-wqkh5\" (UID: \"f1937d85-62aa-4880-81ca-91d58ab2fba2\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-wqkh5" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.463528 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rf6k\" (UniqueName: \"kubernetes.io/projected/e5bf5c53-1a09-4635-9ebb-e2a6fb722e06-kube-api-access-7rf6k\") pod \"manila-operator-controller-manager-58bb8d67cc-v9j5d\" (UID: \"e5bf5c53-1a09-4635-9ebb-e2a6fb722e06\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-v9j5d" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.467432 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-bwjn7" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.470772 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-jsccj"] Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.471809 4796 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jsccj" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.474153 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-9cjvg" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.478457 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-mfg66"] Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.481993 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlgbs\" (UniqueName: \"kubernetes.io/projected/f1937d85-62aa-4880-81ca-91d58ab2fba2-kube-api-access-dlgbs\") pod \"keystone-operator-controller-manager-748dc6576f-wqkh5\" (UID: \"f1937d85-62aa-4880-81ca-91d58ab2fba2\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-wqkh5" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.482405 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rf6k\" (UniqueName: \"kubernetes.io/projected/e5bf5c53-1a09-4635-9ebb-e2a6fb722e06-kube-api-access-7rf6k\") pod \"manila-operator-controller-manager-58bb8d67cc-v9j5d\" (UID: \"e5bf5c53-1a09-4635-9ebb-e2a6fb722e06\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-v9j5d" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.488874 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-7c6bw"] Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.504603 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-jsccj"] Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.527257 4796 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg56b"] Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.528878 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg56b" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.535051 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-wxdtj" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.556326 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-w8gkv" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.564266 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts28c\" (UniqueName: \"kubernetes.io/projected/c20eb9b8-4c87-4145-b550-e887fd680797-kube-api-access-ts28c\") pod \"ironic-operator-controller-manager-5bfcdc958c-7xmhd\" (UID: \"c20eb9b8-4c87-4145-b550-e887fd680797\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7xmhd" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.564302 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt9zw\" (UniqueName: \"kubernetes.io/projected/7cda050e-831a-42f8-93f7-c33e10a8b119-kube-api-access-dt9zw\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-h96k8\" (UID: \"7cda050e-831a-42f8-93f7-c33e10a8b119\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-h96k8" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.564331 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwvdq\" (UniqueName: \"kubernetes.io/projected/b652a700-3131-4706-a300-c3f2c54519a3-kube-api-access-pwvdq\") pod \"neutron-operator-controller-manager-7c57c8bbc4-mfg66\" (UID: 
\"b652a700-3131-4706-a300-c3f2c54519a3\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-mfg66" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.564368 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh42f\" (UniqueName: \"kubernetes.io/projected/4e72b995-27a7-4777-9d17-7b04a3933074-kube-api-access-mh42f\") pod \"octavia-operator-controller-manager-fd75fd47d-jsccj\" (UID: \"4e72b995-27a7-4777-9d17-7b04a3933074\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jsccj" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.564416 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk6px\" (UniqueName: \"kubernetes.io/projected/2d798aaf-7f02-472d-a5c9-53853ce7b2a4-kube-api-access-kk6px\") pod \"ovn-operator-controller-manager-66cf5c67ff-jg56b\" (UID: \"2d798aaf-7f02-472d-a5c9-53853ce7b2a4\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg56b" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.564441 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8m47\" (UniqueName: \"kubernetes.io/projected/5575133b-4226-4a90-b484-aeb1bbcb4dde-kube-api-access-q8m47\") pod \"nova-operator-controller-manager-79556f57fc-7c6bw\" (UID: \"5575133b-4226-4a90-b484-aeb1bbcb4dde\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-7c6bw" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.571955 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh"] Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.573783 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.576010 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-7ljk7" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.577181 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.577404 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-dnzm7" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.585821 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-z7q4q"] Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.587106 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg56b"] Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.587221 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-z7q4q" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.594012 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwvdq\" (UniqueName: \"kubernetes.io/projected/b652a700-3131-4706-a300-c3f2c54519a3-kube-api-access-pwvdq\") pod \"neutron-operator-controller-manager-7c57c8bbc4-mfg66\" (UID: \"b652a700-3131-4706-a300-c3f2c54519a3\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-mfg66" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.594269 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-fnhlv" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.597730 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt9zw\" (UniqueName: \"kubernetes.io/projected/7cda050e-831a-42f8-93f7-c33e10a8b119-kube-api-access-dt9zw\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-h96k8\" (UID: \"7cda050e-831a-42f8-93f7-c33e10a8b119\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-h96k8" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.605672 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6rrmf"] Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.606774 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6rrmf" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.611765 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts28c\" (UniqueName: \"kubernetes.io/projected/c20eb9b8-4c87-4145-b550-e887fd680797-kube-api-access-ts28c\") pod \"ironic-operator-controller-manager-5bfcdc958c-7xmhd\" (UID: \"c20eb9b8-4c87-4145-b550-e887fd680797\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7xmhd" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.621713 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-dhtz8" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.639191 4796 generic.go:334] "Generic (PLEG): container finished" podID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerID="6ee44c58110c9589c1b970cfd7a594ab20931e9987e3c25566f0b8fb802d3fc7" exitCode=0 Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.639304 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerDied","Data":"6ee44c58110c9589c1b970cfd7a594ab20931e9987e3c25566f0b8fb802d3fc7"} Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.639561 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerStarted","Data":"8401d6a31e01e755468ba5162268a2636f0971d1abaa35f12c5126e3ee6beb3c"} Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.639601 4796 scope.go:117] "RemoveContainer" containerID="51ad5aaaaec69282af8ba8d3ab0515dc687f8d212c22650d81fdfbfdba4b24a5" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.662091 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh"] Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.665354 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh42f\" (UniqueName: \"kubernetes.io/projected/4e72b995-27a7-4777-9d17-7b04a3933074-kube-api-access-mh42f\") pod \"octavia-operator-controller-manager-fd75fd47d-jsccj\" (UID: \"4e72b995-27a7-4777-9d17-7b04a3933074\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jsccj" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.666740 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk6px\" (UniqueName: \"kubernetes.io/projected/2d798aaf-7f02-472d-a5c9-53853ce7b2a4-kube-api-access-kk6px\") pod \"ovn-operator-controller-manager-66cf5c67ff-jg56b\" (UID: \"2d798aaf-7f02-472d-a5c9-53853ce7b2a4\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg56b" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.666774 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8m47\" (UniqueName: \"kubernetes.io/projected/5575133b-4226-4a90-b484-aeb1bbcb4dde-kube-api-access-q8m47\") pod \"nova-operator-controller-manager-79556f57fc-7c6bw\" (UID: \"5575133b-4226-4a90-b484-aeb1bbcb4dde\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-7c6bw" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.691949 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh42f\" (UniqueName: \"kubernetes.io/projected/4e72b995-27a7-4777-9d17-7b04a3933074-kube-api-access-mh42f\") pod \"octavia-operator-controller-manager-fd75fd47d-jsccj\" (UID: \"4e72b995-27a7-4777-9d17-7b04a3933074\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jsccj" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.696605 4796 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-z7q4q"] Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.707726 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8m47\" (UniqueName: \"kubernetes.io/projected/5575133b-4226-4a90-b484-aeb1bbcb4dde-kube-api-access-q8m47\") pod \"nova-operator-controller-manager-79556f57fc-7c6bw\" (UID: \"5575133b-4226-4a90-b484-aeb1bbcb4dde\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-7c6bw" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.711641 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6rrmf"] Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.716538 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk6px\" (UniqueName: \"kubernetes.io/projected/2d798aaf-7f02-472d-a5c9-53853ce7b2a4-kube-api-access-kk6px\") pod \"ovn-operator-controller-manager-66cf5c67ff-jg56b\" (UID: \"2d798aaf-7f02-472d-a5c9-53853ce7b2a4\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg56b" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.721343 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7xmhd" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.742755 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-wqkh5" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.743069 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-2v9lc"] Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.744481 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-2v9lc" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.749534 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-7w8rq" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.758299 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-v9j5d" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.773610 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggsf6\" (UniqueName: \"kubernetes.io/projected/217cf053-2a6e-4fbd-8544-830952c6c803-kube-api-access-ggsf6\") pod \"placement-operator-controller-manager-5db546f9d9-z7q4q\" (UID: \"217cf053-2a6e-4fbd-8544-830952c6c803\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-z7q4q" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.773739 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr5mc\" (UniqueName: \"kubernetes.io/projected/5871d7ea-743f-4b9b-9d49-e02f51222ea7-kube-api-access-mr5mc\") pod \"swift-operator-controller-manager-6fdc4fcf86-6rrmf\" (UID: \"5871d7ea-743f-4b9b-9d49-e02f51222ea7\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6rrmf" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.773776 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/399a4df5-120a-40fc-9570-4555ab767e70-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh\" (UID: \"399a4df5-120a-40fc-9570-4555ab767e70\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 
14:40:50.773832 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s4qr\" (UniqueName: \"kubernetes.io/projected/399a4df5-120a-40fc-9570-4555ab767e70-kube-api-access-9s4qr\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh\" (UID: \"399a4df5-120a-40fc-9570-4555ab767e70\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.774505 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-h96k8" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.784688 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-2v9lc"] Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.801118 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-7c6bw" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.801906 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-mfg66" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.810756 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jsccj" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.829422 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-6bbxk"] Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.836074 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-6bbxk" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.844535 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-cjqk8" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.869402 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg56b" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.876250 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr5mc\" (UniqueName: \"kubernetes.io/projected/5871d7ea-743f-4b9b-9d49-e02f51222ea7-kube-api-access-mr5mc\") pod \"swift-operator-controller-manager-6fdc4fcf86-6rrmf\" (UID: \"5871d7ea-743f-4b9b-9d49-e02f51222ea7\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6rrmf" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.876304 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/399a4df5-120a-40fc-9570-4555ab767e70-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh\" (UID: \"399a4df5-120a-40fc-9570-4555ab767e70\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.876347 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s4qr\" (UniqueName: \"kubernetes.io/projected/399a4df5-120a-40fc-9570-4555ab767e70-kube-api-access-9s4qr\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh\" (UID: \"399a4df5-120a-40fc-9570-4555ab767e70\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.876376 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ggsf6\" (UniqueName: \"kubernetes.io/projected/217cf053-2a6e-4fbd-8544-830952c6c803-kube-api-access-ggsf6\") pod \"placement-operator-controller-manager-5db546f9d9-z7q4q\" (UID: \"217cf053-2a6e-4fbd-8544-830952c6c803\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-z7q4q" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.876420 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsq8r\" (UniqueName: \"kubernetes.io/projected/bdc6cc60-f602-4a4e-9f3a-60fc12a9b29e-kube-api-access-bsq8r\") pod \"telemetry-operator-controller-manager-567f98c9d-2v9lc\" (UID: \"bdc6cc60-f602-4a4e-9f3a-60fc12a9b29e\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-2v9lc" Nov 25 14:40:50 crc kubenswrapper[4796]: E1125 14:40:50.876852 4796 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 14:40:50 crc kubenswrapper[4796]: E1125 14:40:50.876890 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/399a4df5-120a-40fc-9570-4555ab767e70-cert podName:399a4df5-120a-40fc-9570-4555ab767e70 nodeName:}" failed. No retries permitted until 2025-11-25 14:40:51.376876579 +0000 UTC m=+979.719986003 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/399a4df5-120a-40fc-9570-4555ab767e70-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh" (UID: "399a4df5-120a-40fc-9570-4555ab767e70") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.880458 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-6bbxk"] Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.894138 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggsf6\" (UniqueName: \"kubernetes.io/projected/217cf053-2a6e-4fbd-8544-830952c6c803-kube-api-access-ggsf6\") pod \"placement-operator-controller-manager-5db546f9d9-z7q4q\" (UID: \"217cf053-2a6e-4fbd-8544-830952c6c803\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-z7q4q" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.903875 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s4qr\" (UniqueName: \"kubernetes.io/projected/399a4df5-120a-40fc-9570-4555ab767e70-kube-api-access-9s4qr\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh\" (UID: \"399a4df5-120a-40fc-9570-4555ab767e70\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.909753 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-z7q4q" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.923266 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr5mc\" (UniqueName: \"kubernetes.io/projected/5871d7ea-743f-4b9b-9d49-e02f51222ea7-kube-api-access-mr5mc\") pod \"swift-operator-controller-manager-6fdc4fcf86-6rrmf\" (UID: \"5871d7ea-743f-4b9b-9d49-e02f51222ea7\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6rrmf" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.931684 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-99zgm"] Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.932937 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-99zgm" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.937100 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-98dz7" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.941259 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6rrmf" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.977859 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv5cr\" (UniqueName: \"kubernetes.io/projected/312c47f9-34dd-4416-b396-fd4f9855e72e-kube-api-access-bv5cr\") pod \"watcher-operator-controller-manager-864885998-99zgm\" (UID: \"312c47f9-34dd-4416-b396-fd4f9855e72e\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-99zgm" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.978019 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsq8r\" (UniqueName: \"kubernetes.io/projected/bdc6cc60-f602-4a4e-9f3a-60fc12a9b29e-kube-api-access-bsq8r\") pod \"telemetry-operator-controller-manager-567f98c9d-2v9lc\" (UID: \"bdc6cc60-f602-4a4e-9f3a-60fc12a9b29e\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-2v9lc" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.978058 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r74f2\" (UniqueName: \"kubernetes.io/projected/dba98963-8ddb-46d0-a6a7-62f337d6d520-kube-api-access-r74f2\") pod \"test-operator-controller-manager-5cb74df96-6bbxk\" (UID: \"dba98963-8ddb-46d0-a6a7-62f337d6d520\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-6bbxk" Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.979198 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-99zgm"] Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.993654 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-77bf44fb75-9sjgx"] Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.994518 4796 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-77bf44fb75-9sjgx"] Nov 25 14:40:50 crc kubenswrapper[4796]: I1125 14:40:50.994616 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-77bf44fb75-9sjgx" Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:50.999924 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.000123 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.004784 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-7vbj7" Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.030226 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsq8r\" (UniqueName: \"kubernetes.io/projected/bdc6cc60-f602-4a4e-9f3a-60fc12a9b29e-kube-api-access-bsq8r\") pod \"telemetry-operator-controller-manager-567f98c9d-2v9lc\" (UID: \"bdc6cc60-f602-4a4e-9f3a-60fc12a9b29e\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-2v9lc" Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.034362 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7b78n"] Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.035311 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7b78n" Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.042700 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-hcbrh" Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.077627 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7b78n"] Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.078942 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drwjl\" (UniqueName: \"kubernetes.io/projected/833cc3da-1e55-4b00-9766-5bc81f81a506-kube-api-access-drwjl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7b78n\" (UID: \"833cc3da-1e55-4b00-9766-5bc81f81a506\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7b78n" Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.078991 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6pxp\" (UniqueName: \"kubernetes.io/projected/909ee785-5087-4b08-9590-10993e0fdeba-kube-api-access-v6pxp\") pod \"openstack-operator-controller-manager-77bf44fb75-9sjgx\" (UID: \"909ee785-5087-4b08-9590-10993e0fdeba\") " pod="openstack-operators/openstack-operator-controller-manager-77bf44fb75-9sjgx" Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.079023 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r74f2\" (UniqueName: \"kubernetes.io/projected/dba98963-8ddb-46d0-a6a7-62f337d6d520-kube-api-access-r74f2\") pod \"test-operator-controller-manager-5cb74df96-6bbxk\" (UID: \"dba98963-8ddb-46d0-a6a7-62f337d6d520\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-6bbxk" Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.079072 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv5cr\" (UniqueName: \"kubernetes.io/projected/312c47f9-34dd-4416-b396-fd4f9855e72e-kube-api-access-bv5cr\") pod \"watcher-operator-controller-manager-864885998-99zgm\" (UID: \"312c47f9-34dd-4416-b396-fd4f9855e72e\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-99zgm" Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.079121 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/909ee785-5087-4b08-9590-10993e0fdeba-webhook-certs\") pod \"openstack-operator-controller-manager-77bf44fb75-9sjgx\" (UID: \"909ee785-5087-4b08-9590-10993e0fdeba\") " pod="openstack-operators/openstack-operator-controller-manager-77bf44fb75-9sjgx" Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.079163 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/909ee785-5087-4b08-9590-10993e0fdeba-metrics-certs\") pod \"openstack-operator-controller-manager-77bf44fb75-9sjgx\" (UID: \"909ee785-5087-4b08-9590-10993e0fdeba\") " pod="openstack-operators/openstack-operator-controller-manager-77bf44fb75-9sjgx" Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.115526 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv5cr\" (UniqueName: \"kubernetes.io/projected/312c47f9-34dd-4416-b396-fd4f9855e72e-kube-api-access-bv5cr\") pod \"watcher-operator-controller-manager-864885998-99zgm\" (UID: \"312c47f9-34dd-4416-b396-fd4f9855e72e\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-99zgm" Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.121300 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r74f2\" (UniqueName: 
\"kubernetes.io/projected/dba98963-8ddb-46d0-a6a7-62f337d6d520-kube-api-access-r74f2\") pod \"test-operator-controller-manager-5cb74df96-6bbxk\" (UID: \"dba98963-8ddb-46d0-a6a7-62f337d6d520\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-6bbxk" Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.148669 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-99zgm" Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.180805 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drwjl\" (UniqueName: \"kubernetes.io/projected/833cc3da-1e55-4b00-9766-5bc81f81a506-kube-api-access-drwjl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7b78n\" (UID: \"833cc3da-1e55-4b00-9766-5bc81f81a506\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7b78n" Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.180848 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6pxp\" (UniqueName: \"kubernetes.io/projected/909ee785-5087-4b08-9590-10993e0fdeba-kube-api-access-v6pxp\") pod \"openstack-operator-controller-manager-77bf44fb75-9sjgx\" (UID: \"909ee785-5087-4b08-9590-10993e0fdeba\") " pod="openstack-operators/openstack-operator-controller-manager-77bf44fb75-9sjgx" Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.180911 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/909ee785-5087-4b08-9590-10993e0fdeba-webhook-certs\") pod \"openstack-operator-controller-manager-77bf44fb75-9sjgx\" (UID: \"909ee785-5087-4b08-9590-10993e0fdeba\") " pod="openstack-operators/openstack-operator-controller-manager-77bf44fb75-9sjgx" Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.180943 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/909ee785-5087-4b08-9590-10993e0fdeba-metrics-certs\") pod \"openstack-operator-controller-manager-77bf44fb75-9sjgx\" (UID: \"909ee785-5087-4b08-9590-10993e0fdeba\") " pod="openstack-operators/openstack-operator-controller-manager-77bf44fb75-9sjgx" Nov 25 14:40:51 crc kubenswrapper[4796]: E1125 14:40:51.181058 4796 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 14:40:51 crc kubenswrapper[4796]: E1125 14:40:51.181099 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/909ee785-5087-4b08-9590-10993e0fdeba-metrics-certs podName:909ee785-5087-4b08-9590-10993e0fdeba nodeName:}" failed. No retries permitted until 2025-11-25 14:40:51.681084491 +0000 UTC m=+980.024193915 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/909ee785-5087-4b08-9590-10993e0fdeba-metrics-certs") pod "openstack-operator-controller-manager-77bf44fb75-9sjgx" (UID: "909ee785-5087-4b08-9590-10993e0fdeba") : secret "metrics-server-cert" not found Nov 25 14:40:51 crc kubenswrapper[4796]: E1125 14:40:51.182029 4796 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 14:40:51 crc kubenswrapper[4796]: E1125 14:40:51.182112 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/909ee785-5087-4b08-9590-10993e0fdeba-webhook-certs podName:909ee785-5087-4b08-9590-10993e0fdeba nodeName:}" failed. No retries permitted until 2025-11-25 14:40:51.682092201 +0000 UTC m=+980.025201625 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/909ee785-5087-4b08-9590-10993e0fdeba-webhook-certs") pod "openstack-operator-controller-manager-77bf44fb75-9sjgx" (UID: "909ee785-5087-4b08-9590-10993e0fdeba") : secret "webhook-server-cert" not found Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.210971 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drwjl\" (UniqueName: \"kubernetes.io/projected/833cc3da-1e55-4b00-9766-5bc81f81a506-kube-api-access-drwjl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7b78n\" (UID: \"833cc3da-1e55-4b00-9766-5bc81f81a506\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7b78n" Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.211536 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6pxp\" (UniqueName: \"kubernetes.io/projected/909ee785-5087-4b08-9590-10993e0fdeba-kube-api-access-v6pxp\") pod \"openstack-operator-controller-manager-77bf44fb75-9sjgx\" (UID: \"909ee785-5087-4b08-9590-10993e0fdeba\") " pod="openstack-operators/openstack-operator-controller-manager-77bf44fb75-9sjgx" Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.277464 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-2v9lc" Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.364210 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-7q45f"] Nov 25 14:40:51 crc kubenswrapper[4796]: W1125 14:40:51.367521 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3472c0d0_0763_4342_83cb_5b7a44e5b2e0.slice/crio-eccabd2ef4c893e8acabc8b0257999ace2d9070063866f0f5ed125fbfe1d528b WatchSource:0}: Error finding container eccabd2ef4c893e8acabc8b0257999ace2d9070063866f0f5ed125fbfe1d528b: Status 404 returned error can't find the container with id eccabd2ef4c893e8acabc8b0257999ace2d9070063866f0f5ed125fbfe1d528b Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.385766 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/399a4df5-120a-40fc-9570-4555ab767e70-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh\" (UID: \"399a4df5-120a-40fc-9570-4555ab767e70\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh" Nov 25 14:40:51 crc kubenswrapper[4796]: E1125 14:40:51.385937 4796 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 14:40:51 crc kubenswrapper[4796]: E1125 14:40:51.385995 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/399a4df5-120a-40fc-9570-4555ab767e70-cert podName:399a4df5-120a-40fc-9570-4555ab767e70 nodeName:}" failed. No retries permitted until 2025-11-25 14:40:52.385977576 +0000 UTC m=+980.729087000 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/399a4df5-120a-40fc-9570-4555ab767e70-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh" (UID: "399a4df5-120a-40fc-9570-4555ab767e70") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.406262 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-6bbxk" Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.507659 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7b78n" Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.531892 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-4w4wl"] Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.647245 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4w4wl" event={"ID":"3a3976ed-e631-4fda-9b60-1e4b62992c70","Type":"ContainerStarted","Data":"ed1f383d0d991ba9fc9039bd94d1704536a7f3eaa22b4f446316d885390e9d88"} Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.658027 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-7q45f" event={"ID":"3472c0d0-0763-4342-83cb-5b7a44e5b2e0","Type":"ContainerStarted","Data":"eccabd2ef4c893e8acabc8b0257999ace2d9070063866f0f5ed125fbfe1d528b"} Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.693970 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/909ee785-5087-4b08-9590-10993e0fdeba-webhook-certs\") pod \"openstack-operator-controller-manager-77bf44fb75-9sjgx\" (UID: 
\"909ee785-5087-4b08-9590-10993e0fdeba\") " pod="openstack-operators/openstack-operator-controller-manager-77bf44fb75-9sjgx" Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.694046 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/909ee785-5087-4b08-9590-10993e0fdeba-metrics-certs\") pod \"openstack-operator-controller-manager-77bf44fb75-9sjgx\" (UID: \"909ee785-5087-4b08-9590-10993e0fdeba\") " pod="openstack-operators/openstack-operator-controller-manager-77bf44fb75-9sjgx" Nov 25 14:40:51 crc kubenswrapper[4796]: E1125 14:40:51.694140 4796 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 14:40:51 crc kubenswrapper[4796]: E1125 14:40:51.694183 4796 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 14:40:51 crc kubenswrapper[4796]: E1125 14:40:51.694214 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/909ee785-5087-4b08-9590-10993e0fdeba-webhook-certs podName:909ee785-5087-4b08-9590-10993e0fdeba nodeName:}" failed. No retries permitted until 2025-11-25 14:40:52.694196442 +0000 UTC m=+981.037305876 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/909ee785-5087-4b08-9590-10993e0fdeba-webhook-certs") pod "openstack-operator-controller-manager-77bf44fb75-9sjgx" (UID: "909ee785-5087-4b08-9590-10993e0fdeba") : secret "webhook-server-cert" not found Nov 25 14:40:51 crc kubenswrapper[4796]: E1125 14:40:51.694237 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/909ee785-5087-4b08-9590-10993e0fdeba-metrics-certs podName:909ee785-5087-4b08-9590-10993e0fdeba nodeName:}" failed. No retries permitted until 2025-11-25 14:40:52.694226233 +0000 UTC m=+981.037335657 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/909ee785-5087-4b08-9590-10993e0fdeba-metrics-certs") pod "openstack-operator-controller-manager-77bf44fb75-9sjgx" (UID: "909ee785-5087-4b08-9590-10993e0fdeba") : secret "metrics-server-cert" not found Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.811644 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-47sfh"] Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.817743 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-tbmwj"] Nov 25 14:40:51 crc kubenswrapper[4796]: W1125 14:40:51.820784 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefaf4581_131a_496d_ba2f_75db34748600.slice/crio-c24379da895eaf8b51b7ccb33e9fcbf13a02971b0f8ccc522782aee6c29c6012 WatchSource:0}: Error finding container c24379da895eaf8b51b7ccb33e9fcbf13a02971b0f8ccc522782aee6c29c6012: Status 404 returned error can't find the container with id c24379da895eaf8b51b7ccb33e9fcbf13a02971b0f8ccc522782aee6c29c6012 Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.882884 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-w8gkv"] Nov 25 14:40:51 crc kubenswrapper[4796]: W1125 14:40:51.907935 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod217cf053_2a6e_4fbd_8544_830952c6c803.slice/crio-a5505101276a8979c7be87edce784d1b8c468598d73fa30b7e7beaee1eb58ee7 WatchSource:0}: Error finding container a5505101276a8979c7be87edce784d1b8c468598d73fa30b7e7beaee1eb58ee7: Status 404 returned error can't find the container with id a5505101276a8979c7be87edce784d1b8c468598d73fa30b7e7beaee1eb58ee7 Nov 25 14:40:51 crc 
kubenswrapper[4796]: I1125 14:40:51.919907 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-z7q4q"] Nov 25 14:40:51 crc kubenswrapper[4796]: W1125 14:40:51.927476 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc20eb9b8_4c87_4145_b550_e887fd680797.slice/crio-1e7a0aa88b2852ae5f5ac42b21295a26dd5b2701c93dfbe7776ca4aa4136957d WatchSource:0}: Error finding container 1e7a0aa88b2852ae5f5ac42b21295a26dd5b2701c93dfbe7776ca4aa4136957d: Status 404 returned error can't find the container with id 1e7a0aa88b2852ae5f5ac42b21295a26dd5b2701c93dfbe7776ca4aa4136957d Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.927478 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-nfdb6"] Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.933501 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7xmhd"] Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.943176 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-wqkh5"] Nov 25 14:40:51 crc kubenswrapper[4796]: I1125 14:40:51.949935 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-7ljk7"] Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.188684 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-7c6bw"] Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.203202 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6rrmf"] Nov 25 14:40:52 crc kubenswrapper[4796]: W1125 14:40:52.204929 4796 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod312c47f9_34dd_4416_b396_fd4f9855e72e.slice/crio-456cdf6eb2f6fbc1b500e83ccb7885ab9d0911fafdae9f78c01eb260ead316ec WatchSource:0}: Error finding container 456cdf6eb2f6fbc1b500e83ccb7885ab9d0911fafdae9f78c01eb260ead316ec: Status 404 returned error can't find the container with id 456cdf6eb2f6fbc1b500e83ccb7885ab9d0911fafdae9f78c01eb260ead316ec Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.209319 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-99zgm"] Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.214289 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-v9j5d"] Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.219495 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-mfg66"] Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.226636 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg56b"] Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.231506 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-h96k8"] Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.239185 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q8m47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-7c6bw_openstack-operators(5575133b-4226-4a90-b484-aeb1bbcb4dde): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.240526 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pwvdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7c57c8bbc4-mfg66_openstack-operators(b652a700-3131-4706-a300-c3f2c54519a3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.242865 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pwvdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7c57c8bbc4-mfg66_openstack-operators(b652a700-3131-4706-a300-c3f2c54519a3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.242960 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mh42f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-fd75fd47d-jsccj_openstack-operators(4e72b995-27a7-4777-9d17-7b04a3933074): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.243072 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q8m47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-7c6bw_openstack-operators(5575133b-4226-4a90-b484-aeb1bbcb4dde): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.245224 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-7c6bw" podUID="5575133b-4226-4a90-b484-aeb1bbcb4dde" Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.245288 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-mfg66" podUID="b652a700-3131-4706-a300-c3f2c54519a3" Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.247046 4796 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mh42f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-fd75fd47d-jsccj_openstack-operators(4e72b995-27a7-4777-9d17-7b04a3933074): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.248611 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" 
pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jsccj" podUID="4e72b995-27a7-4777-9d17-7b04a3933074" Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.257709 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-jsccj"] Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.282083 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-6bbxk"] Nov 25 14:40:52 crc kubenswrapper[4796]: W1125 14:40:52.282766 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d798aaf_7f02_472d_a5c9_53853ce7b2a4.slice/crio-2401b94d302f720ff794d385412bce4db844ddfe79bb743c7f8b4abcc5f65995 WatchSource:0}: Error finding container 2401b94d302f720ff794d385412bce4db844ddfe79bb743c7f8b4abcc5f65995: Status 404 returned error can't find the container with id 2401b94d302f720ff794d385412bce4db844ddfe79bb743c7f8b4abcc5f65995 Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.286097 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kk6px,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-66cf5c67ff-jg56b_openstack-operators(2d798aaf-7f02-472d-a5c9-53853ce7b2a4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.287815 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7b78n"] Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.288714 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kk6px,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-66cf5c67ff-jg56b_openstack-operators(2d798aaf-7f02-472d-a5c9-53853ce7b2a4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.290092 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg56b" 
podUID="2d798aaf-7f02-472d-a5c9-53853ce7b2a4" Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.291538 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-2v9lc"] Nov 25 14:40:52 crc kubenswrapper[4796]: W1125 14:40:52.324816 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod833cc3da_1e55_4b00_9766_5bc81f81a506.slice/crio-608c8b8dc50b96c72d50c3aaf6c2ed49e7d0e42e8d284cab01b6f044985b4296 WatchSource:0}: Error finding container 608c8b8dc50b96c72d50c3aaf6c2ed49e7d0e42e8d284cab01b6f044985b4296: Status 404 returned error can't find the container with id 608c8b8dc50b96c72d50c3aaf6c2ed49e7d0e42e8d284cab01b6f044985b4296 Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.329188 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r74f2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cb74df96-6bbxk_openstack-operators(dba98963-8ddb-46d0-a6a7-62f337d6d520): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.329326 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-drwjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-7b78n_openstack-operators(833cc3da-1e55-4b00-9766-5bc81f81a506): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.330540 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7b78n" podUID="833cc3da-1e55-4b00-9766-5bc81f81a506" Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.331019 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r74f2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
test-operator-controller-manager-5cb74df96-6bbxk_openstack-operators(dba98963-8ddb-46d0-a6a7-62f337d6d520): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.331144 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bsq8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-567f98c9d-2v9lc_openstack-operators(bdc6cc60-f602-4a4e-9f3a-60fc12a9b29e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.332508 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5cb74df96-6bbxk" podUID="dba98963-8ddb-46d0-a6a7-62f337d6d520" Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.333487 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bsq8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-567f98c9d-2v9lc_openstack-operators(bdc6cc60-f602-4a4e-9f3a-60fc12a9b29e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.335510 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-2v9lc" podUID="bdc6cc60-f602-4a4e-9f3a-60fc12a9b29e" Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.411850 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/399a4df5-120a-40fc-9570-4555ab767e70-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh\" (UID: \"399a4df5-120a-40fc-9570-4555ab767e70\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh" Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.417297 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/399a4df5-120a-40fc-9570-4555ab767e70-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh\" (UID: \"399a4df5-120a-40fc-9570-4555ab767e70\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh" Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.680074 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-7c6bw" event={"ID":"5575133b-4226-4a90-b484-aeb1bbcb4dde","Type":"ContainerStarted","Data":"100e8b3f4a5154e8012b3fbe655203c2c7cba3ac9ddff540d036ff536f973d5b"} Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.682753 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-7c6bw" podUID="5575133b-4226-4a90-b484-aeb1bbcb4dde" Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.683535 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-wqkh5" event={"ID":"f1937d85-62aa-4880-81ca-91d58ab2fba2","Type":"ContainerStarted","Data":"91b08d050c50e603e3590599f2aa67819f0b71d5380061123f7cfbc8c66acf2d"} Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.685193 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/watcher-operator-controller-manager-864885998-99zgm" event={"ID":"312c47f9-34dd-4416-b396-fd4f9855e72e","Type":"ContainerStarted","Data":"456cdf6eb2f6fbc1b500e83ccb7885ab9d0911fafdae9f78c01eb260ead316ec"} Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.686127 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-w8gkv" event={"ID":"4f74b624-2ef6-4289-8cb1-8d6babc260f5","Type":"ContainerStarted","Data":"2134308c99b0cb5c400a96945a33e175fded45bb0d5232db77300c0b533e3f5b"} Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.687449 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-h96k8" event={"ID":"7cda050e-831a-42f8-93f7-c33e10a8b119","Type":"ContainerStarted","Data":"580fb09462cc0a61ae09f55b3efda7631aef788d168c0ac3f5f0e5fabc744c06"} Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.692432 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-6bbxk" event={"ID":"dba98963-8ddb-46d0-a6a7-62f337d6d520","Type":"ContainerStarted","Data":"4af529f092a9e386daaa6567f00a0cacf4e6d73f22019f133445b560d2c3d82f"} Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.693995 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-v9j5d" event={"ID":"e5bf5c53-1a09-4635-9ebb-e2a6fb722e06","Type":"ContainerStarted","Data":"81b2576f499ce716ea568edc2bb9353c147986273cda1413376356cedd72795f"} Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.695527 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\", failed to \"StartContainer\" for 
\"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cb74df96-6bbxk" podUID="dba98963-8ddb-46d0-a6a7-62f337d6d520" Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.695750 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nfdb6" event={"ID":"ed513bf3-e75f-40b3-814e-508f4d9e9ce6","Type":"ContainerStarted","Data":"0a3dfd61b3d6a1e85b0146b10527ca4492c0be029593a345e03efe70cb524f5a"} Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.697551 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-7ljk7" event={"ID":"5e82891b-b135-4f6a-8341-7ae6efb7d7ab","Type":"ContainerStarted","Data":"232397172b30959ad1cbdbe4a463f312784053ce1362f1c5c038d44fec689215"} Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.700829 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7xmhd" event={"ID":"c20eb9b8-4c87-4145-b550-e887fd680797","Type":"ContainerStarted","Data":"1e7a0aa88b2852ae5f5ac42b21295a26dd5b2701c93dfbe7776ca4aa4136957d"} Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.701688 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7b78n" event={"ID":"833cc3da-1e55-4b00-9766-5bc81f81a506","Type":"ContainerStarted","Data":"608c8b8dc50b96c72d50c3aaf6c2ed49e7d0e42e8d284cab01b6f044985b4296"} Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.702758 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7b78n" podUID="833cc3da-1e55-4b00-9766-5bc81f81a506" Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.703059 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-mfg66" event={"ID":"b652a700-3131-4706-a300-c3f2c54519a3","Type":"ContainerStarted","Data":"f109f86a76004da6cb2ec463bd238f8c9bc4e8bee87dcd9821552e32c6807290"} Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.704009 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg56b" event={"ID":"2d798aaf-7f02-472d-a5c9-53853ce7b2a4","Type":"ContainerStarted","Data":"2401b94d302f720ff794d385412bce4db844ddfe79bb743c7f8b4abcc5f65995"} Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.704504 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-mfg66" podUID="b652a700-3131-4706-a300-c3f2c54519a3" Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.705420 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-2v9lc" event={"ID":"bdc6cc60-f602-4a4e-9f3a-60fc12a9b29e","Type":"ContainerStarted","Data":"02cb859f4ee8417ec23beb52b8086bddf626c21594e004649a5496040ab166be"} Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 
14:40:52.706180 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-z7q4q" event={"ID":"217cf053-2a6e-4fbd-8544-830952c6c803","Type":"ContainerStarted","Data":"a5505101276a8979c7be87edce784d1b8c468598d73fa30b7e7beaee1eb58ee7"} Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.706894 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh" Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.707733 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-47sfh" event={"ID":"efaf4581-131a-496d-ba2f-75db34748600","Type":"ContainerStarted","Data":"c24379da895eaf8b51b7ccb33e9fcbf13a02971b0f8ccc522782aee6c29c6012"} Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.709216 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-2v9lc" podUID="bdc6cc60-f602-4a4e-9f3a-60fc12a9b29e" Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.709257 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-tbmwj" event={"ID":"9ec5036f-9a2f-4a3f-ad57-191ac97cf6ff","Type":"ContainerStarted","Data":"1a55ec39baa70eb8970bca717796ce321c067bec5c1f56adfc22308885b4584f"} Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.709420 4796 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg56b" podUID="2d798aaf-7f02-472d-a5c9-53853ce7b2a4" Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.712113 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6rrmf" event={"ID":"5871d7ea-743f-4b9b-9d49-e02f51222ea7","Type":"ContainerStarted","Data":"b8881d9e68de458334a31291460ae293a7065bc0e28336e9617c86855659a845"} Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.714233 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jsccj" event={"ID":"4e72b995-27a7-4777-9d17-7b04a3933074","Type":"ContainerStarted","Data":"30b1c10eea3d3326ecdfe04c1ef90f0b7c2df83a3aeceaf5e7d88ee4700af6a1"} Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.716330 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jsccj" podUID="4e72b995-27a7-4777-9d17-7b04a3933074" Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.716529 4796 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: 
secret "webhook-server-cert" not found Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.716649 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/909ee785-5087-4b08-9590-10993e0fdeba-webhook-certs podName:909ee785-5087-4b08-9590-10993e0fdeba nodeName:}" failed. No retries permitted until 2025-11-25 14:40:54.716619877 +0000 UTC m=+983.059729291 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/909ee785-5087-4b08-9590-10993e0fdeba-webhook-certs") pod "openstack-operator-controller-manager-77bf44fb75-9sjgx" (UID: "909ee785-5087-4b08-9590-10993e0fdeba") : secret "webhook-server-cert" not found Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.716260 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/909ee785-5087-4b08-9590-10993e0fdeba-webhook-certs\") pod \"openstack-operator-controller-manager-77bf44fb75-9sjgx\" (UID: \"909ee785-5087-4b08-9590-10993e0fdeba\") " pod="openstack-operators/openstack-operator-controller-manager-77bf44fb75-9sjgx" Nov 25 14:40:52 crc kubenswrapper[4796]: I1125 14:40:52.717016 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/909ee785-5087-4b08-9590-10993e0fdeba-metrics-certs\") pod \"openstack-operator-controller-manager-77bf44fb75-9sjgx\" (UID: \"909ee785-5087-4b08-9590-10993e0fdeba\") " pod="openstack-operators/openstack-operator-controller-manager-77bf44fb75-9sjgx" Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.719331 4796 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 14:40:52 crc kubenswrapper[4796]: E1125 14:40:52.719385 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/909ee785-5087-4b08-9590-10993e0fdeba-metrics-certs 
podName:909ee785-5087-4b08-9590-10993e0fdeba nodeName:}" failed. No retries permitted until 2025-11-25 14:40:54.719367362 +0000 UTC m=+983.062476786 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/909ee785-5087-4b08-9590-10993e0fdeba-metrics-certs") pod "openstack-operator-controller-manager-77bf44fb75-9sjgx" (UID: "909ee785-5087-4b08-9590-10993e0fdeba") : secret "metrics-server-cert" not found Nov 25 14:40:53 crc kubenswrapper[4796]: I1125 14:40:53.192257 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh"] Nov 25 14:40:53 crc kubenswrapper[4796]: W1125 14:40:53.200004 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod399a4df5_120a_40fc_9570_4555ab767e70.slice/crio-967995a93673ffe7efa332b5a7d23e5a1af0071ca811044e4905481b592d8077 WatchSource:0}: Error finding container 967995a93673ffe7efa332b5a7d23e5a1af0071ca811044e4905481b592d8077: Status 404 returned error can't find the container with id 967995a93673ffe7efa332b5a7d23e5a1af0071ca811044e4905481b592d8077 Nov 25 14:40:53 crc kubenswrapper[4796]: I1125 14:40:53.720625 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh" event={"ID":"399a4df5-120a-40fc-9570-4555ab767e70","Type":"ContainerStarted","Data":"967995a93673ffe7efa332b5a7d23e5a1af0071ca811044e4905481b592d8077"} Nov 25 14:40:53 crc kubenswrapper[4796]: E1125 14:40:53.722699 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7b78n" podUID="833cc3da-1e55-4b00-9766-5bc81f81a506" Nov 25 14:40:53 crc kubenswrapper[4796]: E1125 14:40:53.723202 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg56b" podUID="2d798aaf-7f02-472d-a5c9-53853ce7b2a4" Nov 25 14:40:53 crc kubenswrapper[4796]: E1125 14:40:53.723372 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jsccj" podUID="4e72b995-27a7-4777-9d17-7b04a3933074" Nov 25 14:40:53 crc kubenswrapper[4796]: E1125 14:40:53.723509 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-2v9lc" podUID="bdc6cc60-f602-4a4e-9f3a-60fc12a9b29e" Nov 25 14:40:53 crc kubenswrapper[4796]: E1125 14:40:53.724385 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-7c6bw" podUID="5575133b-4226-4a90-b484-aeb1bbcb4dde" Nov 25 14:40:53 crc kubenswrapper[4796]: E1125 14:40:53.725051 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cb74df96-6bbxk" podUID="dba98963-8ddb-46d0-a6a7-62f337d6d520" Nov 25 14:40:53 crc kubenswrapper[4796]: E1125 14:40:53.725703 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-mfg66" 
podUID="b652a700-3131-4706-a300-c3f2c54519a3" Nov 25 14:40:54 crc kubenswrapper[4796]: I1125 14:40:54.751729 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/909ee785-5087-4b08-9590-10993e0fdeba-webhook-certs\") pod \"openstack-operator-controller-manager-77bf44fb75-9sjgx\" (UID: \"909ee785-5087-4b08-9590-10993e0fdeba\") " pod="openstack-operators/openstack-operator-controller-manager-77bf44fb75-9sjgx" Nov 25 14:40:54 crc kubenswrapper[4796]: I1125 14:40:54.752111 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/909ee785-5087-4b08-9590-10993e0fdeba-metrics-certs\") pod \"openstack-operator-controller-manager-77bf44fb75-9sjgx\" (UID: \"909ee785-5087-4b08-9590-10993e0fdeba\") " pod="openstack-operators/openstack-operator-controller-manager-77bf44fb75-9sjgx" Nov 25 14:40:54 crc kubenswrapper[4796]: I1125 14:40:54.764383 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/909ee785-5087-4b08-9590-10993e0fdeba-metrics-certs\") pod \"openstack-operator-controller-manager-77bf44fb75-9sjgx\" (UID: \"909ee785-5087-4b08-9590-10993e0fdeba\") " pod="openstack-operators/openstack-operator-controller-manager-77bf44fb75-9sjgx" Nov 25 14:40:54 crc kubenswrapper[4796]: I1125 14:40:54.765758 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/909ee785-5087-4b08-9590-10993e0fdeba-webhook-certs\") pod \"openstack-operator-controller-manager-77bf44fb75-9sjgx\" (UID: \"909ee785-5087-4b08-9590-10993e0fdeba\") " pod="openstack-operators/openstack-operator-controller-manager-77bf44fb75-9sjgx" Nov 25 14:40:54 crc kubenswrapper[4796]: I1125 14:40:54.781454 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-77bf44fb75-9sjgx" Nov 25 14:41:03 crc kubenswrapper[4796]: E1125 14:41:03.933052 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a" Nov 25 14:41:03 crc kubenswrapper[4796]: E1125 14:41:03.934115 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dlgbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-748dc6576f-wqkh5_openstack-operators(f1937d85-62aa-4880-81ca-91d58ab2fba2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 14:41:04 crc kubenswrapper[4796]: I1125 14:41:04.866602 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-77bf44fb75-9sjgx"] Nov 25 14:41:08 crc kubenswrapper[4796]: E1125 14:41:08.204453 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c" Nov 25 14:41:08 crc kubenswrapper[4796]: E1125 14:41:08.205121 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ggsf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5db546f9d9-z7q4q_openstack-operators(217cf053-2a6e-4fbd-8544-830952c6c803): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 14:41:08 crc kubenswrapper[4796]: E1125 14:41:08.842915 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04" Nov 25 14:41:08 crc kubenswrapper[4796]: E1125 14:41:08.843305 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dt9zw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-cb6c4fdb7-h96k8_openstack-operators(7cda050e-831a-42f8-93f7-c33e10a8b119): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 14:41:10 crc kubenswrapper[4796]: W1125 14:41:10.863302 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod909ee785_5087_4b08_9590_10993e0fdeba.slice/crio-d484a624dd44d9e6bad6e96d4809ae8f69a85a69bacb5dc20a900bb4b21a4b99 WatchSource:0}: Error finding container d484a624dd44d9e6bad6e96d4809ae8f69a85a69bacb5dc20a900bb4b21a4b99: Status 404 returned error can't find the container with id d484a624dd44d9e6bad6e96d4809ae8f69a85a69bacb5dc20a900bb4b21a4b99 Nov 25 14:41:11 crc kubenswrapper[4796]: I1125 14:41:11.841824 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-77bf44fb75-9sjgx" event={"ID":"909ee785-5087-4b08-9590-10993e0fdeba","Type":"ContainerStarted","Data":"d484a624dd44d9e6bad6e96d4809ae8f69a85a69bacb5dc20a900bb4b21a4b99"} Nov 25 14:41:14 crc kubenswrapper[4796]: E1125 14:41:14.568243 4796 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f" Nov 25 14:41:14 crc kubenswrapper[4796]: E1125 14:41:14.568647 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bv5cr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-99zgm_openstack-operators(312c47f9-34dd-4416-b396-fd4f9855e72e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 14:41:24 crc kubenswrapper[4796]: E1125 14:41:24.475438 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d" Nov 25 14:41:24 crc kubenswrapper[4796]: E1125 14:41:24.476339 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r74f2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cb74df96-6bbxk_openstack-operators(dba98963-8ddb-46d0-a6a7-62f337d6d520): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 14:41:27 crc kubenswrapper[4796]: E1125 14:41:27.454735 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6" Nov 25 14:41:27 crc kubenswrapper[4796]: E1125 14:41:27.455964 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pwvdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7c57c8bbc4-mfg66_openstack-operators(b652a700-3131-4706-a300-c3f2c54519a3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 14:41:28 crc kubenswrapper[4796]: E1125 14:41:28.657227 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13" Nov 25 14:41:28 crc kubenswrapper[4796]: E1125 14:41:28.657717 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mh42f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-fd75fd47d-jsccj_openstack-operators(4e72b995-27a7-4777-9d17-7b04a3933074): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 14:41:29 crc kubenswrapper[4796]: E1125 14:41:29.411676 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b" Nov 25 14:41:29 crc kubenswrapper[4796]: E1125 14:41:29.412179 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kk6px,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-66cf5c67ff-jg56b_openstack-operators(2d798aaf-7f02-472d-a5c9-53853ce7b2a4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 14:41:33 crc kubenswrapper[4796]: I1125 14:41:33.992653 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh" event={"ID":"399a4df5-120a-40fc-9570-4555ab767e70","Type":"ContainerStarted","Data":"993e8fffa90079cdb5d016f56303ff56fc24996840a886c3dba4c8f0c6633f1b"} Nov 25 14:41:33 crc kubenswrapper[4796]: I1125 14:41:33.994919 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-77bf44fb75-9sjgx" event={"ID":"909ee785-5087-4b08-9590-10993e0fdeba","Type":"ContainerStarted","Data":"b22a41327eda46ffbf866afbd143af53ac9251ea753c2854fa69ad9636e9d701"} Nov 25 14:41:33 crc kubenswrapper[4796]: I1125 14:41:33.995208 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-77bf44fb75-9sjgx" Nov 25 14:41:33 crc kubenswrapper[4796]: I1125 
14:41:33.998094 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-w8gkv" event={"ID":"4f74b624-2ef6-4289-8cb1-8d6babc260f5","Type":"ContainerStarted","Data":"c8c5dc12a7d13394b2ce90be664a8d9315bfed64423e54750d7d64b970518059"} Nov 25 14:41:34 crc kubenswrapper[4796]: I1125 14:41:33.999988 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4w4wl" event={"ID":"3a3976ed-e631-4fda-9b60-1e4b62992c70","Type":"ContainerStarted","Data":"224d5f78081fb0d783dc503014f06df0a13c14c4d62c130bf9aa6d9cf93cd604"} Nov 25 14:41:34 crc kubenswrapper[4796]: I1125 14:41:34.001815 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nfdb6" event={"ID":"ed513bf3-e75f-40b3-814e-508f4d9e9ce6","Type":"ContainerStarted","Data":"42c35c750b049d06d6b1bd48917827167f9c1e28b157d34b8757300818949cca"} Nov 25 14:41:34 crc kubenswrapper[4796]: I1125 14:41:34.036320 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-77bf44fb75-9sjgx" podStartSLOduration=44.036296574 podStartE2EDuration="44.036296574s" podCreationTimestamp="2025-11-25 14:40:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:41:34.028832682 +0000 UTC m=+1022.371942136" watchObservedRunningTime="2025-11-25 14:41:34.036296574 +0000 UTC m=+1022.379405998" Nov 25 14:41:34 crc kubenswrapper[4796]: E1125 14:41:34.753588 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7rf6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-58bb8d67cc-v9j5d_openstack-operators(e5bf5c53-1a09-4635-9ebb-e2a6fb722e06): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 14:41:34 crc kubenswrapper[4796]: E1125 14:41:34.754786 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-v9j5d" podUID="e5bf5c53-1a09-4635-9ebb-e2a6fb722e06" Nov 25 14:41:35 crc kubenswrapper[4796]: I1125 14:41:35.017284 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6rrmf" 
event={"ID":"5871d7ea-743f-4b9b-9d49-e02f51222ea7","Type":"ContainerStarted","Data":"f26123a2ca08022f0fc359fc7115c01daa849be27cdc8c3696b1d227378ee70e"} Nov 25 14:41:35 crc kubenswrapper[4796]: I1125 14:41:35.019384 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7xmhd" event={"ID":"c20eb9b8-4c87-4145-b550-e887fd680797","Type":"ContainerStarted","Data":"6ba6f7d2d36c338d19826206a593fd84721c30e0404b507707a7d0182b507c2a"} Nov 25 14:41:35 crc kubenswrapper[4796]: I1125 14:41:35.021314 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-47sfh" event={"ID":"efaf4581-131a-496d-ba2f-75db34748600","Type":"ContainerStarted","Data":"e4cebcceac35cfe7fe93aa93bb329cb7983188d211ea5267963a7c8eeafb9c55"} Nov 25 14:41:35 crc kubenswrapper[4796]: I1125 14:41:35.022800 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-7ljk7" event={"ID":"5e82891b-b135-4f6a-8341-7ae6efb7d7ab","Type":"ContainerStarted","Data":"4a4106d23c20ff3b9cf7b481626459c20a723b6d029ec1a27775af286cb536b5"} Nov 25 14:41:35 crc kubenswrapper[4796]: I1125 14:41:35.024216 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-7q45f" event={"ID":"3472c0d0-0763-4342-83cb-5b7a44e5b2e0","Type":"ContainerStarted","Data":"c36ef2434498a331c59e37515b52a125a3ff7164ad504aa50f207377f77948a9"} Nov 25 14:41:35 crc kubenswrapper[4796]: I1125 14:41:35.026344 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-2v9lc" event={"ID":"bdc6cc60-f602-4a4e-9f3a-60fc12a9b29e","Type":"ContainerStarted","Data":"f654e117675a83db67e1b6fa5dfab49a1981b534c85eb82aaa816aaaa9255d6d"} Nov 25 14:41:35 crc kubenswrapper[4796]: I1125 14:41:35.028189 4796 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-7c6bw" event={"ID":"5575133b-4226-4a90-b484-aeb1bbcb4dde","Type":"ContainerStarted","Data":"8393898eb346273ac32da224642a2cb9a1fb61d0964be88c4ec4c61dd1f14bdd"} Nov 25 14:41:35 crc kubenswrapper[4796]: I1125 14:41:35.029640 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-v9j5d" event={"ID":"e5bf5c53-1a09-4635-9ebb-e2a6fb722e06","Type":"ContainerStarted","Data":"1141d0da256579cd6f323411b8ba050ceb8ae7834db187450290af458485609f"} Nov 25 14:41:35 crc kubenswrapper[4796]: I1125 14:41:35.029771 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-v9j5d" Nov 25 14:41:35 crc kubenswrapper[4796]: I1125 14:41:35.031365 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-tbmwj" event={"ID":"9ec5036f-9a2f-4a3f-ad57-191ac97cf6ff","Type":"ContainerStarted","Data":"952dd82c32b71536a0727a1ebb2656de784382a7be10963c2a0e3a43cf13037d"} Nov 25 14:41:35 crc kubenswrapper[4796]: E1125 14:41:35.031461 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-v9j5d" podUID="e5bf5c53-1a09-4635-9ebb-e2a6fb722e06" Nov 25 14:41:36 crc kubenswrapper[4796]: E1125 14:41:36.045728 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-v9j5d" podUID="e5bf5c53-1a09-4635-9ebb-e2a6fb722e06" Nov 25 
14:41:37 crc kubenswrapper[4796]: E1125 14:41:37.594367 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Nov 25 14:41:37 crc kubenswrapper[4796]: E1125 14:41:37.594752 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-drwjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-7b78n_openstack-operators(833cc3da-1e55-4b00-9766-5bc81f81a506): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 14:41:37 crc kubenswrapper[4796]: E1125 14:41:37.596110 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7b78n" podUID="833cc3da-1e55-4b00-9766-5bc81f81a506" Nov 25 14:41:40 crc kubenswrapper[4796]: I1125 14:41:40.798987 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-v9j5d" Nov 25 14:41:40 crc kubenswrapper[4796]: E1125 14:41:40.807768 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" 
pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-v9j5d" podUID="e5bf5c53-1a09-4635-9ebb-e2a6fb722e06" Nov 25 14:41:44 crc kubenswrapper[4796]: I1125 14:41:44.787698 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-77bf44fb75-9sjgx" Nov 25 14:41:49 crc kubenswrapper[4796]: E1125 14:41:49.481805 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage2711161980/6\": happened during read: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 25 14:41:49 crc kubenswrapper[4796]: E1125 14:41:49.483496 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dlgbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-748dc6576f-wqkh5_openstack-operators(f1937d85-62aa-4880-81ca-91d58ab2fba2): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage2711161980/6\": happened during read: context canceled" logger="UnhandledError" Nov 25 14:41:49 crc kubenswrapper[4796]: E1125 14:41:49.484916 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage2711161980/6\\\": happened during read: context canceled\"]" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-wqkh5" podUID="f1937d85-62aa-4880-81ca-91d58ab2fba2" Nov 25 14:41:51 crc kubenswrapper[4796]: E1125 14:41:51.028809 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7b78n" podUID="833cc3da-1e55-4b00-9766-5bc81f81a506" Nov 25 14:41:51 crc kubenswrapper[4796]: I1125 14:41:51.067213 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 14:41:52 crc kubenswrapper[4796]: I1125 14:41:52.157208 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7xmhd" event={"ID":"c20eb9b8-4c87-4145-b550-e887fd680797","Type":"ContainerStarted","Data":"cbd4dfd21aac451b10d1ce5afd7dcb01e4b069a444f8e4e0f756cc4b5b6d64ec"} Nov 25 14:41:52 crc kubenswrapper[4796]: I1125 14:41:52.160156 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-w8gkv" event={"ID":"4f74b624-2ef6-4289-8cb1-8d6babc260f5","Type":"ContainerStarted","Data":"8a6d3e1b05751c3d551c0e49427b84e8568a7e6f79135d19c9378ff6fcc1b64e"} Nov 25 14:41:52 crc kubenswrapper[4796]: I1125 14:41:52.162415 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-7c6bw" event={"ID":"5575133b-4226-4a90-b484-aeb1bbcb4dde","Type":"ContainerStarted","Data":"0cbd1838bf1042f81e1339172e39a71c30515f5b4b8eacb7bd64430aaf726679"} Nov 25 14:41:52 crc kubenswrapper[4796]: I1125 14:41:52.165529 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-47sfh" event={"ID":"efaf4581-131a-496d-ba2f-75db34748600","Type":"ContainerStarted","Data":"578b76627d92d44691a7dd0c4a5a38bd29b14940594b9da3d790ee66e312a68c"} Nov 25 14:41:52 crc kubenswrapper[4796]: I1125 14:41:52.170509 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-tbmwj" event={"ID":"9ec5036f-9a2f-4a3f-ad57-191ac97cf6ff","Type":"ContainerStarted","Data":"c5eef5b01b988133b213987fda2f481047736f87fcc9bdf8499997c5ba2dc719"} Nov 25 14:41:52 crc kubenswrapper[4796]: I1125 14:41:52.176174 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6rrmf" event={"ID":"5871d7ea-743f-4b9b-9d49-e02f51222ea7","Type":"ContainerStarted","Data":"d66add9f8b13365856ee7c7fdc3ad4391fcbb4be1d1707a6d63015df1bc08847"} Nov 25 14:41:53 crc kubenswrapper[4796]: I1125 14:41:53.198621 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nfdb6" event={"ID":"ed513bf3-e75f-40b3-814e-508f4d9e9ce6","Type":"ContainerStarted","Data":"b65aefa22a1a736825f3e75022d4518aa6ed9c29655c94e2b23c3d5514f0e76b"} Nov 25 14:41:53 crc kubenswrapper[4796]: I1125 14:41:53.217615 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-7ljk7" event={"ID":"5e82891b-b135-4f6a-8341-7ae6efb7d7ab","Type":"ContainerStarted","Data":"1e3d0c9aa4e08be4f722a50ff356afadd5533f8751c5ec840657595d1f473740"} Nov 25 14:41:53 crc kubenswrapper[4796]: I1125 14:41:53.217674 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6rrmf" Nov 25 14:41:53 crc kubenswrapper[4796]: I1125 14:41:53.217690 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-7c6bw" Nov 25 14:41:53 crc kubenswrapper[4796]: I1125 14:41:53.217712 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-w8gkv" Nov 25 14:41:53 crc kubenswrapper[4796]: I1125 14:41:53.217725 4796 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-tbmwj" Nov 25 14:41:53 crc kubenswrapper[4796]: I1125 14:41:53.217777 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-7c6bw" Nov 25 14:41:53 crc kubenswrapper[4796]: I1125 14:41:53.217807 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-w8gkv" Nov 25 14:41:53 crc kubenswrapper[4796]: I1125 14:41:53.217817 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7xmhd" Nov 25 14:41:53 crc kubenswrapper[4796]: I1125 14:41:53.218747 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-7ljk7" Nov 25 14:41:53 crc kubenswrapper[4796]: I1125 14:41:53.219004 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6rrmf" Nov 25 14:41:53 crc kubenswrapper[4796]: I1125 14:41:53.219374 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-47sfh" Nov 25 14:41:53 crc kubenswrapper[4796]: I1125 14:41:53.219908 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7xmhd" Nov 25 14:41:53 crc kubenswrapper[4796]: I1125 14:41:53.224468 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-7ljk7" Nov 25 14:41:53 crc kubenswrapper[4796]: I1125 14:41:53.225331 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-47sfh" Nov 25 14:41:53 crc kubenswrapper[4796]: I1125 14:41:53.226129 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-tbmwj" Nov 25 14:41:53 crc kubenswrapper[4796]: I1125 14:41:53.248431 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6rrmf" podStartSLOduration=4.109498615 podStartE2EDuration="1m3.248409335s" podCreationTimestamp="2025-11-25 14:40:50 +0000 UTC" firstStartedPulling="2025-11-25 14:40:52.237750736 +0000 UTC m=+980.580860150" lastFinishedPulling="2025-11-25 14:41:51.376661426 +0000 UTC m=+1039.719770870" observedRunningTime="2025-11-25 14:41:53.238536309 +0000 UTC m=+1041.581645803" watchObservedRunningTime="2025-11-25 14:41:53.248409335 +0000 UTC m=+1041.591518779" Nov 25 14:41:53 crc kubenswrapper[4796]: I1125 14:41:53.285942 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-47sfh" podStartSLOduration=3.731997624 podStartE2EDuration="1m3.285924007s" podCreationTimestamp="2025-11-25 14:40:50 +0000 UTC" firstStartedPulling="2025-11-25 14:40:51.822673591 +0000 UTC m=+980.165783015" lastFinishedPulling="2025-11-25 14:41:51.376599944 +0000 UTC m=+1039.719709398" observedRunningTime="2025-11-25 14:41:53.281359376 +0000 UTC m=+1041.624468810" watchObservedRunningTime="2025-11-25 14:41:53.285924007 +0000 UTC m=+1041.629033431" Nov 25 14:41:53 crc kubenswrapper[4796]: I1125 14:41:53.289283 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-7xmhd" podStartSLOduration=3.841853947 podStartE2EDuration="1m3.28925192s" podCreationTimestamp="2025-11-25 14:40:50 +0000 UTC" firstStartedPulling="2025-11-25 
14:40:51.931351157 +0000 UTC m=+980.274460581" lastFinishedPulling="2025-11-25 14:41:51.37874909 +0000 UTC m=+1039.721858554" observedRunningTime="2025-11-25 14:41:53.266105703 +0000 UTC m=+1041.609215137" watchObservedRunningTime="2025-11-25 14:41:53.28925192 +0000 UTC m=+1041.632361344" Nov 25 14:41:53 crc kubenswrapper[4796]: I1125 14:41:53.305529 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-7c6bw" podStartSLOduration=3.876158659 podStartE2EDuration="1m3.305512354s" podCreationTimestamp="2025-11-25 14:40:50 +0000 UTC" firstStartedPulling="2025-11-25 14:40:52.239062957 +0000 UTC m=+980.582172381" lastFinishedPulling="2025-11-25 14:41:51.668416632 +0000 UTC m=+1040.011526076" observedRunningTime="2025-11-25 14:41:53.299349742 +0000 UTC m=+1041.642459166" watchObservedRunningTime="2025-11-25 14:41:53.305512354 +0000 UTC m=+1041.648621768" Nov 25 14:41:53 crc kubenswrapper[4796]: I1125 14:41:53.322856 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-7ljk7" podStartSLOduration=3.6089766130000003 podStartE2EDuration="1m3.32283876s" podCreationTimestamp="2025-11-25 14:40:50 +0000 UTC" firstStartedPulling="2025-11-25 14:40:51.934949708 +0000 UTC m=+980.278059132" lastFinishedPulling="2025-11-25 14:41:51.648811845 +0000 UTC m=+1039.991921279" observedRunningTime="2025-11-25 14:41:53.321735916 +0000 UTC m=+1041.664845340" watchObservedRunningTime="2025-11-25 14:41:53.32283876 +0000 UTC m=+1041.665948184" Nov 25 14:41:53 crc kubenswrapper[4796]: I1125 14:41:53.345264 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-tbmwj" podStartSLOduration=3.517676336 podStartE2EDuration="1m3.345249084s" podCreationTimestamp="2025-11-25 14:40:50 +0000 UTC" firstStartedPulling="2025-11-25 14:40:51.822840116 
+0000 UTC m=+980.165949540" lastFinishedPulling="2025-11-25 14:41:51.650412834 +0000 UTC m=+1039.993522288" observedRunningTime="2025-11-25 14:41:53.341612471 +0000 UTC m=+1041.684721895" watchObservedRunningTime="2025-11-25 14:41:53.345249084 +0000 UTC m=+1041.688358508" Nov 25 14:41:53 crc kubenswrapper[4796]: I1125 14:41:53.402376 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-774b86978c-w8gkv" podStartSLOduration=3.884834648 podStartE2EDuration="1m3.402359733s" podCreationTimestamp="2025-11-25 14:40:50 +0000 UTC" firstStartedPulling="2025-11-25 14:40:51.92014302 +0000 UTC m=+980.263252444" lastFinishedPulling="2025-11-25 14:41:51.437668095 +0000 UTC m=+1039.780777529" observedRunningTime="2025-11-25 14:41:53.397645007 +0000 UTC m=+1041.740754431" watchObservedRunningTime="2025-11-25 14:41:53.402359733 +0000 UTC m=+1041.745469157" Nov 25 14:41:54 crc kubenswrapper[4796]: I1125 14:41:54.209301 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nfdb6" Nov 25 14:41:54 crc kubenswrapper[4796]: I1125 14:41:54.213156 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nfdb6" Nov 25 14:41:54 crc kubenswrapper[4796]: I1125 14:41:54.235304 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nfdb6" podStartSLOduration=3.858076539 podStartE2EDuration="1m4.235281289s" podCreationTimestamp="2025-11-25 14:40:50 +0000 UTC" firstStartedPulling="2025-11-25 14:40:51.919960424 +0000 UTC m=+980.263069848" lastFinishedPulling="2025-11-25 14:41:52.297165174 +0000 UTC m=+1040.640274598" observedRunningTime="2025-11-25 14:41:54.229365046 +0000 UTC m=+1042.572474480" watchObservedRunningTime="2025-11-25 14:41:54.235281289 +0000 UTC 
m=+1042.578390733" Nov 25 14:41:56 crc kubenswrapper[4796]: E1125 14:41:56.019143 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-99zgm" podUID="312c47f9-34dd-4416-b396-fd4f9855e72e" Nov 25 14:41:56 crc kubenswrapper[4796]: I1125 14:41:56.223191 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-99zgm" event={"ID":"312c47f9-34dd-4416-b396-fd4f9855e72e","Type":"ContainerStarted","Data":"20a46217e573fde393467d89065432284e8a3713d60a03a2cd1affd4332c24c0"} Nov 25 14:41:56 crc kubenswrapper[4796]: I1125 14:41:56.225114 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-v9j5d" event={"ID":"e5bf5c53-1a09-4635-9ebb-e2a6fb722e06","Type":"ContainerStarted","Data":"7d797eab4abadf2c4f96a8866546ef7c326aea4fae5af135d136fd08161c1e5b"} Nov 25 14:41:56 crc kubenswrapper[4796]: I1125 14:41:56.266665 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-v9j5d" podStartSLOduration=30.117902131 podStartE2EDuration="1m6.266645041s" podCreationTimestamp="2025-11-25 14:40:50 +0000 UTC" firstStartedPulling="2025-11-25 14:40:52.238052866 +0000 UTC m=+980.581162290" lastFinishedPulling="2025-11-25 14:41:28.386795776 +0000 UTC m=+1016.729905200" observedRunningTime="2025-11-25 14:41:56.259242493 +0000 UTC m=+1044.602351917" watchObservedRunningTime="2025-11-25 14:41:56.266645041 +0000 UTC m=+1044.609754475" Nov 25 14:41:58 crc kubenswrapper[4796]: I1125 14:41:58.244903 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-2v9lc" 
event={"ID":"bdc6cc60-f602-4a4e-9f3a-60fc12a9b29e","Type":"ContainerStarted","Data":"7cdae84553b2f99307937f0b0cded46c263ae6da080722f3cfaa282e70127ce1"} Nov 25 14:41:58 crc kubenswrapper[4796]: I1125 14:41:58.246607 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-2v9lc" Nov 25 14:41:58 crc kubenswrapper[4796]: I1125 14:41:58.246854 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-wqkh5" event={"ID":"f1937d85-62aa-4880-81ca-91d58ab2fba2","Type":"ContainerStarted","Data":"2b2a93148b23698a957b397948aabf616a30fbf9515571f45c505b7b991432a7"} Nov 25 14:41:58 crc kubenswrapper[4796]: I1125 14:41:58.246892 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-wqkh5" event={"ID":"f1937d85-62aa-4880-81ca-91d58ab2fba2","Type":"ContainerStarted","Data":"2c2fa93afc7000a214f569bb28fbbba49e54f59fb3afaab7b7886ab5f6a535e8"} Nov 25 14:41:58 crc kubenswrapper[4796]: I1125 14:41:58.247125 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-wqkh5" Nov 25 14:41:58 crc kubenswrapper[4796]: I1125 14:41:58.248363 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-2v9lc" Nov 25 14:41:58 crc kubenswrapper[4796]: I1125 14:41:58.249621 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh" event={"ID":"399a4df5-120a-40fc-9570-4555ab767e70","Type":"ContainerStarted","Data":"405b5fe1d8b4c2b65c2429a653d65931305a0cd87cfbf53924b7b95bd6039f88"} Nov 25 14:41:58 crc kubenswrapper[4796]: I1125 14:41:58.249959 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh" Nov 25 14:41:58 crc kubenswrapper[4796]: I1125 14:41:58.252193 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4w4wl" event={"ID":"3a3976ed-e631-4fda-9b60-1e4b62992c70","Type":"ContainerStarted","Data":"d6b1c6970b56ad60ffb79c05237dffd89a4ba8f0fbab1700abc083b21b5ba197"} Nov 25 14:41:58 crc kubenswrapper[4796]: I1125 14:41:58.252447 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4w4wl" Nov 25 14:41:58 crc kubenswrapper[4796]: I1125 14:41:58.254661 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4w4wl" Nov 25 14:41:58 crc kubenswrapper[4796]: I1125 14:41:58.254723 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh" Nov 25 14:41:58 crc kubenswrapper[4796]: I1125 14:41:58.272375 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-2v9lc" podStartSLOduration=2.696616668 podStartE2EDuration="1m8.27235692s" podCreationTimestamp="2025-11-25 14:40:50 +0000 UTC" firstStartedPulling="2025-11-25 14:40:52.331083427 +0000 UTC m=+980.674192851" lastFinishedPulling="2025-11-25 14:41:57.906823669 +0000 UTC m=+1046.249933103" observedRunningTime="2025-11-25 14:41:58.266596882 +0000 UTC m=+1046.609706306" watchObservedRunningTime="2025-11-25 14:41:58.27235692 +0000 UTC m=+1046.615466344" Nov 25 14:41:58 crc kubenswrapper[4796]: I1125 14:41:58.343507 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-4w4wl" 
podStartSLOduration=2.6945540230000002 podStartE2EDuration="1m8.343476892s" podCreationTimestamp="2025-11-25 14:40:50 +0000 UTC" firstStartedPulling="2025-11-25 14:40:51.566692013 +0000 UTC m=+979.909801437" lastFinishedPulling="2025-11-25 14:41:57.215614892 +0000 UTC m=+1045.558724306" observedRunningTime="2025-11-25 14:41:58.335156675 +0000 UTC m=+1046.678266099" watchObservedRunningTime="2025-11-25 14:41:58.343476892 +0000 UTC m=+1046.686586316" Nov 25 14:41:58 crc kubenswrapper[4796]: I1125 14:41:58.397213 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-wqkh5" podStartSLOduration=3.115032666 podStartE2EDuration="1m8.397190176s" podCreationTimestamp="2025-11-25 14:40:50 +0000 UTC" firstStartedPulling="2025-11-25 14:40:51.932161982 +0000 UTC m=+980.275271406" lastFinishedPulling="2025-11-25 14:41:57.214319492 +0000 UTC m=+1045.557428916" observedRunningTime="2025-11-25 14:41:58.393845292 +0000 UTC m=+1046.736954736" watchObservedRunningTime="2025-11-25 14:41:58.397190176 +0000 UTC m=+1046.740299600" Nov 25 14:41:58 crc kubenswrapper[4796]: I1125 14:41:58.443354 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh" podStartSLOduration=4.425403999 podStartE2EDuration="1m8.443321235s" podCreationTimestamp="2025-11-25 14:40:50 +0000 UTC" firstStartedPulling="2025-11-25 14:40:53.204341472 +0000 UTC m=+981.547450896" lastFinishedPulling="2025-11-25 14:41:57.222258708 +0000 UTC m=+1045.565368132" observedRunningTime="2025-11-25 14:41:58.437111663 +0000 UTC m=+1046.780221087" watchObservedRunningTime="2025-11-25 14:41:58.443321235 +0000 UTC m=+1046.786430669" Nov 25 14:41:58 crc kubenswrapper[4796]: E1125 14:41:58.689103 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jsccj" podUID="4e72b995-27a7-4777-9d17-7b04a3933074" Nov 25 14:41:58 crc kubenswrapper[4796]: E1125 14:41:58.898245 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-h96k8" podUID="7cda050e-831a-42f8-93f7-c33e10a8b119" Nov 25 14:41:59 crc kubenswrapper[4796]: E1125 14:41:59.069300 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 25 14:41:59 crc kubenswrapper[4796]: E1125 14:41:59.069450 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rpvbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-86dc4d89c8-7q45f_openstack-operators(3472c0d0-0763-4342-83cb-5b7a44e5b2e0): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Nov 25 14:41:59 crc kubenswrapper[4796]: E1125 14:41:59.070644 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-7q45f" podUID="3472c0d0-0763-4342-83cb-5b7a44e5b2e0" Nov 25 14:41:59 crc kubenswrapper[4796]: E1125 14:41:59.077900 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5cb74df96-6bbxk" podUID="dba98963-8ddb-46d0-a6a7-62f337d6d520" Nov 25 14:41:59 crc kubenswrapper[4796]: I1125 14:41:59.259755 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jsccj" event={"ID":"4e72b995-27a7-4777-9d17-7b04a3933074","Type":"ContainerStarted","Data":"b3ae360e21f7bbc43c2aa53fa6d27bae55e4c9632b6e8edaf7255e97eb5b97eb"} Nov 25 14:41:59 crc kubenswrapper[4796]: I1125 14:41:59.261287 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-h96k8" event={"ID":"7cda050e-831a-42f8-93f7-c33e10a8b119","Type":"ContainerStarted","Data":"03818564ecf0f150108b87370e45df415bfd3cfb266d6ba858a8e06eb14db9e5"} Nov 25 14:41:59 crc kubenswrapper[4796]: I1125 14:41:59.263559 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-6bbxk" event={"ID":"dba98963-8ddb-46d0-a6a7-62f337d6d520","Type":"ContainerStarted","Data":"886ffae7b5faf6296d902d62c58a6e3dbce296d6cbbd044bac8250e72973c75c"} Nov 25 14:41:59 crc kubenswrapper[4796]: I1125 14:41:59.265453 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-7q45f" Nov 25 14:41:59 crc kubenswrapper[4796]: I1125 14:41:59.268478 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-7q45f" Nov 25 14:41:59 crc kubenswrapper[4796]: E1125 14:41:59.409289 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-mfg66" podUID="b652a700-3131-4706-a300-c3f2c54519a3" Nov 25 14:42:00 crc kubenswrapper[4796]: I1125 14:42:00.277250 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-mfg66" 
event={"ID":"b652a700-3131-4706-a300-c3f2c54519a3","Type":"ContainerStarted","Data":"85f26b6c78f4bbbab9b2d362c5028103db52ad3aa16b664c71b55a76bb635cc7"} Nov 25 14:42:01 crc kubenswrapper[4796]: I1125 14:42:01.285958 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-z7q4q" event={"ID":"217cf053-2a6e-4fbd-8544-830952c6c803","Type":"ContainerStarted","Data":"b30ededfd32367a44710fec6a58d73e4f8fb912dcdcab3cfbe715cae14f67674"} Nov 25 14:42:01 crc kubenswrapper[4796]: I1125 14:42:01.288257 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-7q45f" event={"ID":"3472c0d0-0763-4342-83cb-5b7a44e5b2e0","Type":"ContainerStarted","Data":"7c766a304dfa8bf4b66b98cbd778dadc8114df5cd7c386817907f3b2c9db1d64"} Nov 25 14:42:02 crc kubenswrapper[4796]: I1125 14:42:02.336680 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-7q45f" podStartSLOduration=50.153231539 podStartE2EDuration="1m12.336630583s" podCreationTimestamp="2025-11-25 14:40:50 +0000 UTC" firstStartedPulling="2025-11-25 14:40:51.373737997 +0000 UTC m=+979.716847421" lastFinishedPulling="2025-11-25 14:41:13.557137031 +0000 UTC m=+1001.900246465" observedRunningTime="2025-11-25 14:42:02.330615337 +0000 UTC m=+1050.673724771" watchObservedRunningTime="2025-11-25 14:42:02.336630583 +0000 UTC m=+1050.679740007" Nov 25 14:42:04 crc kubenswrapper[4796]: E1125 14:42:04.164253 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-z7q4q" podUID="217cf053-2a6e-4fbd-8544-830952c6c803" Nov 25 14:42:04 crc kubenswrapper[4796]: E1125 14:42:04.423700 4796 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg56b" podUID="2d798aaf-7f02-472d-a5c9-53853ce7b2a4" Nov 25 14:42:05 crc kubenswrapper[4796]: I1125 14:42:05.326982 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-h96k8" event={"ID":"7cda050e-831a-42f8-93f7-c33e10a8b119","Type":"ContainerStarted","Data":"61299964ad085fd4ac781685b658c4b75ec512606337ac1dc7e811fd8307ee8c"} Nov 25 14:42:05 crc kubenswrapper[4796]: I1125 14:42:05.327315 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-h96k8" Nov 25 14:42:05 crc kubenswrapper[4796]: I1125 14:42:05.329174 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-6bbxk" event={"ID":"dba98963-8ddb-46d0-a6a7-62f337d6d520","Type":"ContainerStarted","Data":"d94384d4ecf013048b36b5d9c23d0e8f4b184af5fb0949f34e0f14509f7124b9"} Nov 25 14:42:05 crc kubenswrapper[4796]: I1125 14:42:05.329373 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cb74df96-6bbxk" Nov 25 14:42:05 crc kubenswrapper[4796]: I1125 14:42:05.330893 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-z7q4q" event={"ID":"217cf053-2a6e-4fbd-8544-830952c6c803","Type":"ContainerStarted","Data":"ffcde3806f30e06db647db22e53138dbda382abcdda97673df2db62709bf7389"} Nov 25 14:42:05 crc kubenswrapper[4796]: I1125 14:42:05.331071 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-z7q4q" Nov 25 14:42:05 crc kubenswrapper[4796]: 
I1125 14:42:05.332688 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-mfg66" event={"ID":"b652a700-3131-4706-a300-c3f2c54519a3","Type":"ContainerStarted","Data":"ef38d4862fca849b48e67ca73af317942e86cb6ae0c3762866cef09e45031628"} Nov 25 14:42:05 crc kubenswrapper[4796]: I1125 14:42:05.332863 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-mfg66" Nov 25 14:42:05 crc kubenswrapper[4796]: I1125 14:42:05.334677 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-99zgm" event={"ID":"312c47f9-34dd-4416-b396-fd4f9855e72e","Type":"ContainerStarted","Data":"77f13ad63e2fbaa0ea4162d5529bb980484404dd2c07bc0ec4547079334315f3"} Nov 25 14:42:05 crc kubenswrapper[4796]: I1125 14:42:05.335401 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-99zgm" Nov 25 14:42:05 crc kubenswrapper[4796]: I1125 14:42:05.336804 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg56b" event={"ID":"2d798aaf-7f02-472d-a5c9-53853ce7b2a4","Type":"ContainerStarted","Data":"1a9f9e8e8cb8fbb4286ff51a46a83eeeb0a9f2e9d91fc4f74af128d7b7fb5de8"} Nov 25 14:42:05 crc kubenswrapper[4796]: I1125 14:42:05.339895 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jsccj" event={"ID":"4e72b995-27a7-4777-9d17-7b04a3933074","Type":"ContainerStarted","Data":"9d1886418fdbf7c44f766de162547707874491ea6ff7053707ddce0eb2eb22f4"} Nov 25 14:42:05 crc kubenswrapper[4796]: I1125 14:42:05.340074 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jsccj" Nov 25 14:42:05 crc 
kubenswrapper[4796]: I1125 14:42:05.349011 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-h96k8" podStartSLOduration=3.013979095 podStartE2EDuration="1m15.348991608s" podCreationTimestamp="2025-11-25 14:40:50 +0000 UTC" firstStartedPulling="2025-11-25 14:40:52.217427216 +0000 UTC m=+980.560536650" lastFinishedPulling="2025-11-25 14:42:04.552439739 +0000 UTC m=+1052.895549163" observedRunningTime="2025-11-25 14:42:05.344937782 +0000 UTC m=+1053.688047206" watchObservedRunningTime="2025-11-25 14:42:05.348991608 +0000 UTC m=+1053.692101032" Nov 25 14:42:05 crc kubenswrapper[4796]: I1125 14:42:05.360695 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-z7q4q" podStartSLOduration=2.532706461 podStartE2EDuration="1m15.36067776s" podCreationTimestamp="2025-11-25 14:40:50 +0000 UTC" firstStartedPulling="2025-11-25 14:40:51.919921663 +0000 UTC m=+980.263031087" lastFinishedPulling="2025-11-25 14:42:04.747892962 +0000 UTC m=+1053.091002386" observedRunningTime="2025-11-25 14:42:05.360397641 +0000 UTC m=+1053.703507065" watchObservedRunningTime="2025-11-25 14:42:05.36067776 +0000 UTC m=+1053.703787184" Nov 25 14:42:05 crc kubenswrapper[4796]: I1125 14:42:05.401989 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jsccj" podStartSLOduration=3.091846219 podStartE2EDuration="1m15.401962969s" podCreationTimestamp="2025-11-25 14:40:50 +0000 UTC" firstStartedPulling="2025-11-25 14:40:52.242905546 +0000 UTC m=+980.586014970" lastFinishedPulling="2025-11-25 14:42:04.553022296 +0000 UTC m=+1052.896131720" observedRunningTime="2025-11-25 14:42:05.376671095 +0000 UTC m=+1053.719780539" watchObservedRunningTime="2025-11-25 14:42:05.401962969 +0000 UTC m=+1053.745072393" Nov 25 14:42:05 crc 
kubenswrapper[4796]: I1125 14:42:05.417718 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cb74df96-6bbxk" podStartSLOduration=3.195931692 podStartE2EDuration="1m15.417692565s" podCreationTimestamp="2025-11-25 14:40:50 +0000 UTC" firstStartedPulling="2025-11-25 14:40:52.329065125 +0000 UTC m=+980.672174549" lastFinishedPulling="2025-11-25 14:42:04.550825998 +0000 UTC m=+1052.893935422" observedRunningTime="2025-11-25 14:42:05.41331609 +0000 UTC m=+1053.756425524" watchObservedRunningTime="2025-11-25 14:42:05.417692565 +0000 UTC m=+1053.760801989" Nov 25 14:42:05 crc kubenswrapper[4796]: I1125 14:42:05.430780 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-mfg66" podStartSLOduration=3.119667999 podStartE2EDuration="1m15.430763881s" podCreationTimestamp="2025-11-25 14:40:50 +0000 UTC" firstStartedPulling="2025-11-25 14:40:52.240379507 +0000 UTC m=+980.583488941" lastFinishedPulling="2025-11-25 14:42:04.551475399 +0000 UTC m=+1052.894584823" observedRunningTime="2025-11-25 14:42:05.429265454 +0000 UTC m=+1053.772374888" watchObservedRunningTime="2025-11-25 14:42:05.430763881 +0000 UTC m=+1053.773873305" Nov 25 14:42:05 crc kubenswrapper[4796]: I1125 14:42:05.451847 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-864885998-99zgm" podStartSLOduration=3.439521356 podStartE2EDuration="1m15.451825993s" podCreationTimestamp="2025-11-25 14:40:50 +0000 UTC" firstStartedPulling="2025-11-25 14:40:52.208714157 +0000 UTC m=+980.551823591" lastFinishedPulling="2025-11-25 14:42:04.221018804 +0000 UTC m=+1052.564128228" observedRunningTime="2025-11-25 14:42:05.445746424 +0000 UTC m=+1053.788855848" watchObservedRunningTime="2025-11-25 14:42:05.451825993 +0000 UTC m=+1053.794935427" Nov 25 14:42:06 crc 
kubenswrapper[4796]: I1125 14:42:06.347885 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7b78n" event={"ID":"833cc3da-1e55-4b00-9766-5bc81f81a506","Type":"ContainerStarted","Data":"b9f9fc26f6dc85edc2e6df17985af208dd5bb0e3d82ed817d942c267243965ae"} Nov 25 14:42:06 crc kubenswrapper[4796]: I1125 14:42:06.350352 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg56b" event={"ID":"2d798aaf-7f02-472d-a5c9-53853ce7b2a4","Type":"ContainerStarted","Data":"c57f49ede30d146a32f87a422db9b7f9e00405e203091cfe6a8449b19d813402"} Nov 25 14:42:06 crc kubenswrapper[4796]: I1125 14:42:06.361608 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7b78n" podStartSLOduration=2.852075632 podStartE2EDuration="1m16.361580639s" podCreationTimestamp="2025-11-25 14:40:50 +0000 UTC" firstStartedPulling="2025-11-25 14:40:52.32926845 +0000 UTC m=+980.672377874" lastFinishedPulling="2025-11-25 14:42:05.838773427 +0000 UTC m=+1054.181882881" observedRunningTime="2025-11-25 14:42:06.360289919 +0000 UTC m=+1054.703399343" watchObservedRunningTime="2025-11-25 14:42:06.361580639 +0000 UTC m=+1054.704690063" Nov 25 14:42:06 crc kubenswrapper[4796]: I1125 14:42:06.383783 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg56b" podStartSLOduration=2.730679532 podStartE2EDuration="1m16.383756545s" podCreationTimestamp="2025-11-25 14:40:50 +0000 UTC" firstStartedPulling="2025-11-25 14:40:52.285954479 +0000 UTC m=+980.629063903" lastFinishedPulling="2025-11-25 14:42:05.939031492 +0000 UTC m=+1054.282140916" observedRunningTime="2025-11-25 14:42:06.380272058 +0000 UTC m=+1054.723381502" watchObservedRunningTime="2025-11-25 14:42:06.383756545 +0000 UTC m=+1054.726865969" Nov 25 
14:42:10 crc kubenswrapper[4796]: I1125 14:42:10.745639 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-wqkh5" Nov 25 14:42:10 crc kubenswrapper[4796]: I1125 14:42:10.790896 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-h96k8" Nov 25 14:42:10 crc kubenswrapper[4796]: I1125 14:42:10.804813 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-mfg66" Nov 25 14:42:10 crc kubenswrapper[4796]: I1125 14:42:10.823770 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-jsccj" Nov 25 14:42:10 crc kubenswrapper[4796]: I1125 14:42:10.870571 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg56b" Nov 25 14:42:10 crc kubenswrapper[4796]: I1125 14:42:10.913969 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-z7q4q" Nov 25 14:42:11 crc kubenswrapper[4796]: I1125 14:42:11.152536 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-864885998-99zgm" Nov 25 14:42:11 crc kubenswrapper[4796]: I1125 14:42:11.409525 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cb74df96-6bbxk" Nov 25 14:42:20 crc kubenswrapper[4796]: I1125 14:42:20.872543 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg56b" Nov 25 14:42:36 crc kubenswrapper[4796]: I1125 14:42:36.422639 4796 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kspv4"] Nov 25 14:42:36 crc kubenswrapper[4796]: I1125 14:42:36.424238 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kspv4" Nov 25 14:42:36 crc kubenswrapper[4796]: I1125 14:42:36.427198 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 25 14:42:36 crc kubenswrapper[4796]: I1125 14:42:36.427399 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 25 14:42:36 crc kubenswrapper[4796]: I1125 14:42:36.428830 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 25 14:42:36 crc kubenswrapper[4796]: I1125 14:42:36.430452 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-frll4" Nov 25 14:42:36 crc kubenswrapper[4796]: I1125 14:42:36.433594 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kspv4"] Nov 25 14:42:36 crc kubenswrapper[4796]: I1125 14:42:36.514308 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-csg29"] Nov 25 14:42:36 crc kubenswrapper[4796]: I1125 14:42:36.515981 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-csg29" Nov 25 14:42:36 crc kubenswrapper[4796]: I1125 14:42:36.517655 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 25 14:42:36 crc kubenswrapper[4796]: I1125 14:42:36.531727 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-csg29"] Nov 25 14:42:36 crc kubenswrapper[4796]: I1125 14:42:36.606159 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzgts\" (UniqueName: \"kubernetes.io/projected/cdba9c40-fa3e-4234-8217-5ea48d209af0-kube-api-access-rzgts\") pod \"dnsmasq-dns-675f4bcbfc-kspv4\" (UID: \"cdba9c40-fa3e-4234-8217-5ea48d209af0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kspv4" Nov 25 14:42:36 crc kubenswrapper[4796]: I1125 14:42:36.606343 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdba9c40-fa3e-4234-8217-5ea48d209af0-config\") pod \"dnsmasq-dns-675f4bcbfc-kspv4\" (UID: \"cdba9c40-fa3e-4234-8217-5ea48d209af0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kspv4" Nov 25 14:42:36 crc kubenswrapper[4796]: I1125 14:42:36.708245 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2-config\") pod \"dnsmasq-dns-78dd6ddcc-csg29\" (UID: \"0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-csg29" Nov 25 14:42:36 crc kubenswrapper[4796]: I1125 14:42:36.708319 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdba9c40-fa3e-4234-8217-5ea48d209af0-config\") pod \"dnsmasq-dns-675f4bcbfc-kspv4\" (UID: \"cdba9c40-fa3e-4234-8217-5ea48d209af0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kspv4" Nov 25 14:42:36 crc 
kubenswrapper[4796]: I1125 14:42:36.708399 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v6fv\" (UniqueName: \"kubernetes.io/projected/0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2-kube-api-access-4v6fv\") pod \"dnsmasq-dns-78dd6ddcc-csg29\" (UID: \"0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-csg29" Nov 25 14:42:36 crc kubenswrapper[4796]: I1125 14:42:36.708456 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzgts\" (UniqueName: \"kubernetes.io/projected/cdba9c40-fa3e-4234-8217-5ea48d209af0-kube-api-access-rzgts\") pod \"dnsmasq-dns-675f4bcbfc-kspv4\" (UID: \"cdba9c40-fa3e-4234-8217-5ea48d209af0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kspv4" Nov 25 14:42:36 crc kubenswrapper[4796]: I1125 14:42:36.708483 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-csg29\" (UID: \"0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-csg29" Nov 25 14:42:36 crc kubenswrapper[4796]: I1125 14:42:36.709140 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdba9c40-fa3e-4234-8217-5ea48d209af0-config\") pod \"dnsmasq-dns-675f4bcbfc-kspv4\" (UID: \"cdba9c40-fa3e-4234-8217-5ea48d209af0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kspv4" Nov 25 14:42:36 crc kubenswrapper[4796]: I1125 14:42:36.730209 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzgts\" (UniqueName: \"kubernetes.io/projected/cdba9c40-fa3e-4234-8217-5ea48d209af0-kube-api-access-rzgts\") pod \"dnsmasq-dns-675f4bcbfc-kspv4\" (UID: \"cdba9c40-fa3e-4234-8217-5ea48d209af0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kspv4" Nov 25 14:42:36 crc kubenswrapper[4796]: 
I1125 14:42:36.741101 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kspv4" Nov 25 14:42:36 crc kubenswrapper[4796]: I1125 14:42:36.810972 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v6fv\" (UniqueName: \"kubernetes.io/projected/0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2-kube-api-access-4v6fv\") pod \"dnsmasq-dns-78dd6ddcc-csg29\" (UID: \"0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-csg29" Nov 25 14:42:36 crc kubenswrapper[4796]: I1125 14:42:36.811063 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-csg29\" (UID: \"0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-csg29" Nov 25 14:42:36 crc kubenswrapper[4796]: I1125 14:42:36.811110 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2-config\") pod \"dnsmasq-dns-78dd6ddcc-csg29\" (UID: \"0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-csg29" Nov 25 14:42:36 crc kubenswrapper[4796]: I1125 14:42:36.812535 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2-config\") pod \"dnsmasq-dns-78dd6ddcc-csg29\" (UID: \"0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-csg29" Nov 25 14:42:36 crc kubenswrapper[4796]: I1125 14:42:36.812614 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-csg29\" (UID: \"0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-csg29" Nov 25 14:42:36 crc kubenswrapper[4796]: I1125 14:42:36.834110 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v6fv\" (UniqueName: \"kubernetes.io/projected/0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2-kube-api-access-4v6fv\") pod \"dnsmasq-dns-78dd6ddcc-csg29\" (UID: \"0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2\") " pod="openstack/dnsmasq-dns-78dd6ddcc-csg29" Nov 25 14:42:37 crc kubenswrapper[4796]: I1125 14:42:37.131064 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-csg29" Nov 25 14:42:37 crc kubenswrapper[4796]: I1125 14:42:37.153891 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kspv4"] Nov 25 14:42:37 crc kubenswrapper[4796]: I1125 14:42:37.390262 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-csg29"] Nov 25 14:42:37 crc kubenswrapper[4796]: I1125 14:42:37.572619 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-kspv4" event={"ID":"cdba9c40-fa3e-4234-8217-5ea48d209af0","Type":"ContainerStarted","Data":"e040ecd14414917c2887a6c54ef2150b8fc1f5163eba25099c6f846bf16cc2ee"} Nov 25 14:42:37 crc kubenswrapper[4796]: I1125 14:42:37.574291 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-csg29" event={"ID":"0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2","Type":"ContainerStarted","Data":"cad390ab6174722b8b8885393f6bf4df041084cdc892c54eed92a540053f52d5"} Nov 25 14:42:39 crc kubenswrapper[4796]: I1125 14:42:39.817995 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kspv4"] Nov 25 14:42:39 crc kubenswrapper[4796]: I1125 14:42:39.837788 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zv5xz"] Nov 25 14:42:39 crc kubenswrapper[4796]: I1125 14:42:39.838913 4796 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zv5xz" Nov 25 14:42:39 crc kubenswrapper[4796]: I1125 14:42:39.850333 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zv5xz"] Nov 25 14:42:39 crc kubenswrapper[4796]: I1125 14:42:39.962255 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e67870b3-2007-43e4-86cc-d4e4153c3e15-config\") pod \"dnsmasq-dns-666b6646f7-zv5xz\" (UID: \"e67870b3-2007-43e4-86cc-d4e4153c3e15\") " pod="openstack/dnsmasq-dns-666b6646f7-zv5xz" Nov 25 14:42:39 crc kubenswrapper[4796]: I1125 14:42:39.962324 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e67870b3-2007-43e4-86cc-d4e4153c3e15-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zv5xz\" (UID: \"e67870b3-2007-43e4-86cc-d4e4153c3e15\") " pod="openstack/dnsmasq-dns-666b6646f7-zv5xz" Nov 25 14:42:39 crc kubenswrapper[4796]: I1125 14:42:39.962353 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6xt4\" (UniqueName: \"kubernetes.io/projected/e67870b3-2007-43e4-86cc-d4e4153c3e15-kube-api-access-p6xt4\") pod \"dnsmasq-dns-666b6646f7-zv5xz\" (UID: \"e67870b3-2007-43e4-86cc-d4e4153c3e15\") " pod="openstack/dnsmasq-dns-666b6646f7-zv5xz" Nov 25 14:42:40 crc kubenswrapper[4796]: I1125 14:42:40.063401 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6xt4\" (UniqueName: \"kubernetes.io/projected/e67870b3-2007-43e4-86cc-d4e4153c3e15-kube-api-access-p6xt4\") pod \"dnsmasq-dns-666b6646f7-zv5xz\" (UID: \"e67870b3-2007-43e4-86cc-d4e4153c3e15\") " pod="openstack/dnsmasq-dns-666b6646f7-zv5xz" Nov 25 14:42:40 crc kubenswrapper[4796]: I1125 14:42:40.063500 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/e67870b3-2007-43e4-86cc-d4e4153c3e15-config\") pod \"dnsmasq-dns-666b6646f7-zv5xz\" (UID: \"e67870b3-2007-43e4-86cc-d4e4153c3e15\") " pod="openstack/dnsmasq-dns-666b6646f7-zv5xz" Nov 25 14:42:40 crc kubenswrapper[4796]: I1125 14:42:40.063542 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e67870b3-2007-43e4-86cc-d4e4153c3e15-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zv5xz\" (UID: \"e67870b3-2007-43e4-86cc-d4e4153c3e15\") " pod="openstack/dnsmasq-dns-666b6646f7-zv5xz" Nov 25 14:42:40 crc kubenswrapper[4796]: I1125 14:42:40.064425 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e67870b3-2007-43e4-86cc-d4e4153c3e15-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zv5xz\" (UID: \"e67870b3-2007-43e4-86cc-d4e4153c3e15\") " pod="openstack/dnsmasq-dns-666b6646f7-zv5xz" Nov 25 14:42:40 crc kubenswrapper[4796]: I1125 14:42:40.064665 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e67870b3-2007-43e4-86cc-d4e4153c3e15-config\") pod \"dnsmasq-dns-666b6646f7-zv5xz\" (UID: \"e67870b3-2007-43e4-86cc-d4e4153c3e15\") " pod="openstack/dnsmasq-dns-666b6646f7-zv5xz" Nov 25 14:42:40 crc kubenswrapper[4796]: I1125 14:42:40.092302 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6xt4\" (UniqueName: \"kubernetes.io/projected/e67870b3-2007-43e4-86cc-d4e4153c3e15-kube-api-access-p6xt4\") pod \"dnsmasq-dns-666b6646f7-zv5xz\" (UID: \"e67870b3-2007-43e4-86cc-d4e4153c3e15\") " pod="openstack/dnsmasq-dns-666b6646f7-zv5xz" Nov 25 14:42:40 crc kubenswrapper[4796]: I1125 14:42:40.156235 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-csg29"] Nov 25 14:42:40 crc kubenswrapper[4796]: I1125 14:42:40.182296 4796 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kcw2g"] Nov 25 14:42:40 crc kubenswrapper[4796]: I1125 14:42:40.186738 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zv5xz" Nov 25 14:42:40 crc kubenswrapper[4796]: I1125 14:42:40.187848 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-kcw2g" Nov 25 14:42:40 crc kubenswrapper[4796]: I1125 14:42:40.202545 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kcw2g"] Nov 25 14:42:40 crc kubenswrapper[4796]: I1125 14:42:40.272911 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0e1037c-c549-4fcf-9c16-13721a1b8bd3-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-kcw2g\" (UID: \"a0e1037c-c549-4fcf-9c16-13721a1b8bd3\") " pod="openstack/dnsmasq-dns-57d769cc4f-kcw2g" Nov 25 14:42:40 crc kubenswrapper[4796]: I1125 14:42:40.272985 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e1037c-c549-4fcf-9c16-13721a1b8bd3-config\") pod \"dnsmasq-dns-57d769cc4f-kcw2g\" (UID: \"a0e1037c-c549-4fcf-9c16-13721a1b8bd3\") " pod="openstack/dnsmasq-dns-57d769cc4f-kcw2g" Nov 25 14:42:40 crc kubenswrapper[4796]: I1125 14:42:40.273086 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr2vm\" (UniqueName: \"kubernetes.io/projected/a0e1037c-c549-4fcf-9c16-13721a1b8bd3-kube-api-access-tr2vm\") pod \"dnsmasq-dns-57d769cc4f-kcw2g\" (UID: \"a0e1037c-c549-4fcf-9c16-13721a1b8bd3\") " pod="openstack/dnsmasq-dns-57d769cc4f-kcw2g" Nov 25 14:42:40 crc kubenswrapper[4796]: I1125 14:42:40.373606 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/a0e1037c-c549-4fcf-9c16-13721a1b8bd3-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-kcw2g\" (UID: \"a0e1037c-c549-4fcf-9c16-13721a1b8bd3\") " pod="openstack/dnsmasq-dns-57d769cc4f-kcw2g" Nov 25 14:42:40 crc kubenswrapper[4796]: I1125 14:42:40.373653 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e1037c-c549-4fcf-9c16-13721a1b8bd3-config\") pod \"dnsmasq-dns-57d769cc4f-kcw2g\" (UID: \"a0e1037c-c549-4fcf-9c16-13721a1b8bd3\") " pod="openstack/dnsmasq-dns-57d769cc4f-kcw2g" Nov 25 14:42:40 crc kubenswrapper[4796]: I1125 14:42:40.373713 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr2vm\" (UniqueName: \"kubernetes.io/projected/a0e1037c-c549-4fcf-9c16-13721a1b8bd3-kube-api-access-tr2vm\") pod \"dnsmasq-dns-57d769cc4f-kcw2g\" (UID: \"a0e1037c-c549-4fcf-9c16-13721a1b8bd3\") " pod="openstack/dnsmasq-dns-57d769cc4f-kcw2g" Nov 25 14:42:40 crc kubenswrapper[4796]: I1125 14:42:40.374474 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0e1037c-c549-4fcf-9c16-13721a1b8bd3-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-kcw2g\" (UID: \"a0e1037c-c549-4fcf-9c16-13721a1b8bd3\") " pod="openstack/dnsmasq-dns-57d769cc4f-kcw2g" Nov 25 14:42:40 crc kubenswrapper[4796]: I1125 14:42:40.374502 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e1037c-c549-4fcf-9c16-13721a1b8bd3-config\") pod \"dnsmasq-dns-57d769cc4f-kcw2g\" (UID: \"a0e1037c-c549-4fcf-9c16-13721a1b8bd3\") " pod="openstack/dnsmasq-dns-57d769cc4f-kcw2g" Nov 25 14:42:40 crc kubenswrapper[4796]: I1125 14:42:40.389540 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr2vm\" (UniqueName: \"kubernetes.io/projected/a0e1037c-c549-4fcf-9c16-13721a1b8bd3-kube-api-access-tr2vm\") pod 
\"dnsmasq-dns-57d769cc4f-kcw2g\" (UID: \"a0e1037c-c549-4fcf-9c16-13721a1b8bd3\") " pod="openstack/dnsmasq-dns-57d769cc4f-kcw2g" Nov 25 14:42:40 crc kubenswrapper[4796]: I1125 14:42:40.514926 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-kcw2g" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.045892 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.047603 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.056058 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.056237 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.056426 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-r8wzf" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.056594 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.056801 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.056961 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.057174 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.061181 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 14:42:41 crc 
kubenswrapper[4796]: I1125 14:42:41.184730 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.184777 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/df357d5a-93ca-48cc-bcec-b01ba247136e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.184802 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/df357d5a-93ca-48cc-bcec-b01ba247136e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.184877 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/df357d5a-93ca-48cc-bcec-b01ba247136e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.184918 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/df357d5a-93ca-48cc-bcec-b01ba247136e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.184944 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkmzx\" (UniqueName: \"kubernetes.io/projected/df357d5a-93ca-48cc-bcec-b01ba247136e-kube-api-access-vkmzx\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.185016 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/df357d5a-93ca-48cc-bcec-b01ba247136e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.185051 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df357d5a-93ca-48cc-bcec-b01ba247136e-config-data\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.185072 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/df357d5a-93ca-48cc-bcec-b01ba247136e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.185131 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/df357d5a-93ca-48cc-bcec-b01ba247136e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.185153 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/df357d5a-93ca-48cc-bcec-b01ba247136e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.286078 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/df357d5a-93ca-48cc-bcec-b01ba247136e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.286131 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/df357d5a-93ca-48cc-bcec-b01ba247136e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.286174 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/df357d5a-93ca-48cc-bcec-b01ba247136e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.286204 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/df357d5a-93ca-48cc-bcec-b01ba247136e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.286239 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkmzx\" (UniqueName: 
\"kubernetes.io/projected/df357d5a-93ca-48cc-bcec-b01ba247136e-kube-api-access-vkmzx\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.286288 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/df357d5a-93ca-48cc-bcec-b01ba247136e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.286313 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df357d5a-93ca-48cc-bcec-b01ba247136e-config-data\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.286343 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/df357d5a-93ca-48cc-bcec-b01ba247136e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.286387 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/df357d5a-93ca-48cc-bcec-b01ba247136e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.286411 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/df357d5a-93ca-48cc-bcec-b01ba247136e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " 
pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.286440 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.286842 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.287080 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/df357d5a-93ca-48cc-bcec-b01ba247136e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.287369 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/df357d5a-93ca-48cc-bcec-b01ba247136e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.288930 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/df357d5a-93ca-48cc-bcec-b01ba247136e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.289521 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/df357d5a-93ca-48cc-bcec-b01ba247136e-config-data\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.291051 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/df357d5a-93ca-48cc-bcec-b01ba247136e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.292504 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/df357d5a-93ca-48cc-bcec-b01ba247136e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.293344 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/df357d5a-93ca-48cc-bcec-b01ba247136e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.296187 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/df357d5a-93ca-48cc-bcec-b01ba247136e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.298830 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/df357d5a-93ca-48cc-bcec-b01ba247136e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 
25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.306518 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkmzx\" (UniqueName: \"kubernetes.io/projected/df357d5a-93ca-48cc-bcec-b01ba247136e-kube-api-access-vkmzx\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.317294 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.318196 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.319390 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.326307 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bx2jc" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.326901 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.327358 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.327583 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.327362 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.328157 4796 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.328850 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.341294 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.381270 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.488777 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1729cee4-39e5-4e3c-90ed-51b16a110a6a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.488887 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1729cee4-39e5-4e3c-90ed-51b16a110a6a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.488930 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz6p2\" (UniqueName: \"kubernetes.io/projected/1729cee4-39e5-4e3c-90ed-51b16a110a6a-kube-api-access-gz6p2\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.488956 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/1729cee4-39e5-4e3c-90ed-51b16a110a6a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.488976 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1729cee4-39e5-4e3c-90ed-51b16a110a6a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.489014 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1729cee4-39e5-4e3c-90ed-51b16a110a6a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.489038 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1729cee4-39e5-4e3c-90ed-51b16a110a6a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.489067 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1729cee4-39e5-4e3c-90ed-51b16a110a6a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.489095 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.489116 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1729cee4-39e5-4e3c-90ed-51b16a110a6a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.489143 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1729cee4-39e5-4e3c-90ed-51b16a110a6a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.590361 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1729cee4-39e5-4e3c-90ed-51b16a110a6a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.590406 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1729cee4-39e5-4e3c-90ed-51b16a110a6a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.590424 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz6p2\" (UniqueName: 
\"kubernetes.io/projected/1729cee4-39e5-4e3c-90ed-51b16a110a6a-kube-api-access-gz6p2\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.590448 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1729cee4-39e5-4e3c-90ed-51b16a110a6a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.590467 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1729cee4-39e5-4e3c-90ed-51b16a110a6a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.590494 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1729cee4-39e5-4e3c-90ed-51b16a110a6a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.590513 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1729cee4-39e5-4e3c-90ed-51b16a110a6a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.590540 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/1729cee4-39e5-4e3c-90ed-51b16a110a6a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.590561 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.590593 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1729cee4-39e5-4e3c-90ed-51b16a110a6a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.590618 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1729cee4-39e5-4e3c-90ed-51b16a110a6a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.591382 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1729cee4-39e5-4e3c-90ed-51b16a110a6a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.592425 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1729cee4-39e5-4e3c-90ed-51b16a110a6a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.592558 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.592638 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1729cee4-39e5-4e3c-90ed-51b16a110a6a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.593061 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1729cee4-39e5-4e3c-90ed-51b16a110a6a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.593276 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1729cee4-39e5-4e3c-90ed-51b16a110a6a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.596167 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1729cee4-39e5-4e3c-90ed-51b16a110a6a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc 
kubenswrapper[4796]: I1125 14:42:41.601533 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1729cee4-39e5-4e3c-90ed-51b16a110a6a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.601996 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1729cee4-39e5-4e3c-90ed-51b16a110a6a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.622019 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1729cee4-39e5-4e3c-90ed-51b16a110a6a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.626415 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz6p2\" (UniqueName: \"kubernetes.io/projected/1729cee4-39e5-4e3c-90ed-51b16a110a6a-kube-api-access-gz6p2\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.635870 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:41 crc kubenswrapper[4796]: I1125 14:42:41.684352 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:42:42 crc kubenswrapper[4796]: I1125 14:42:42.801248 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 25 14:42:42 crc kubenswrapper[4796]: I1125 14:42:42.802700 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 25 14:42:42 crc kubenswrapper[4796]: I1125 14:42:42.804673 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-56cg4" Nov 25 14:42:42 crc kubenswrapper[4796]: I1125 14:42:42.804845 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 25 14:42:42 crc kubenswrapper[4796]: I1125 14:42:42.805690 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 25 14:42:42 crc kubenswrapper[4796]: I1125 14:42:42.805713 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 25 14:42:42 crc kubenswrapper[4796]: I1125 14:42:42.819697 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 25 14:42:42 crc kubenswrapper[4796]: I1125 14:42:42.826439 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 25 14:42:42 crc kubenswrapper[4796]: I1125 14:42:42.924183 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/25a388f4-cd5a-404d-a777-46f4410e0b3a-kolla-config\") pod \"openstack-galera-0\" (UID: \"25a388f4-cd5a-404d-a777-46f4410e0b3a\") " pod="openstack/openstack-galera-0" Nov 25 14:42:42 crc kubenswrapper[4796]: I1125 14:42:42.924223 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/25a388f4-cd5a-404d-a777-46f4410e0b3a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"25a388f4-cd5a-404d-a777-46f4410e0b3a\") " pod="openstack/openstack-galera-0" Nov 25 14:42:42 crc kubenswrapper[4796]: I1125 14:42:42.924253 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25a388f4-cd5a-404d-a777-46f4410e0b3a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"25a388f4-cd5a-404d-a777-46f4410e0b3a\") " pod="openstack/openstack-galera-0" Nov 25 14:42:42 crc kubenswrapper[4796]: I1125 14:42:42.924409 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"25a388f4-cd5a-404d-a777-46f4410e0b3a\") " pod="openstack/openstack-galera-0" Nov 25 14:42:42 crc kubenswrapper[4796]: I1125 14:42:42.924439 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25a388f4-cd5a-404d-a777-46f4410e0b3a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"25a388f4-cd5a-404d-a777-46f4410e0b3a\") " pod="openstack/openstack-galera-0" Nov 25 14:42:42 crc kubenswrapper[4796]: I1125 14:42:42.924463 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbz8q\" (UniqueName: \"kubernetes.io/projected/25a388f4-cd5a-404d-a777-46f4410e0b3a-kube-api-access-lbz8q\") pod \"openstack-galera-0\" (UID: \"25a388f4-cd5a-404d-a777-46f4410e0b3a\") " pod="openstack/openstack-galera-0" Nov 25 14:42:42 crc kubenswrapper[4796]: I1125 14:42:42.924536 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/25a388f4-cd5a-404d-a777-46f4410e0b3a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"25a388f4-cd5a-404d-a777-46f4410e0b3a\") " pod="openstack/openstack-galera-0" Nov 25 14:42:42 crc kubenswrapper[4796]: I1125 14:42:42.924679 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/25a388f4-cd5a-404d-a777-46f4410e0b3a-config-data-default\") pod \"openstack-galera-0\" (UID: \"25a388f4-cd5a-404d-a777-46f4410e0b3a\") " pod="openstack/openstack-galera-0" Nov 25 14:42:43 crc kubenswrapper[4796]: I1125 14:42:43.026208 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/25a388f4-cd5a-404d-a777-46f4410e0b3a-config-data-default\") pod \"openstack-galera-0\" (UID: \"25a388f4-cd5a-404d-a777-46f4410e0b3a\") " pod="openstack/openstack-galera-0" Nov 25 14:42:43 crc kubenswrapper[4796]: I1125 14:42:43.026275 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/25a388f4-cd5a-404d-a777-46f4410e0b3a-kolla-config\") pod \"openstack-galera-0\" (UID: \"25a388f4-cd5a-404d-a777-46f4410e0b3a\") " pod="openstack/openstack-galera-0" Nov 25 14:42:43 crc kubenswrapper[4796]: I1125 14:42:43.026290 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/25a388f4-cd5a-404d-a777-46f4410e0b3a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"25a388f4-cd5a-404d-a777-46f4410e0b3a\") " pod="openstack/openstack-galera-0" Nov 25 14:42:43 crc kubenswrapper[4796]: I1125 14:42:43.026312 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25a388f4-cd5a-404d-a777-46f4410e0b3a-combined-ca-bundle\") pod 
\"openstack-galera-0\" (UID: \"25a388f4-cd5a-404d-a777-46f4410e0b3a\") " pod="openstack/openstack-galera-0" Nov 25 14:42:43 crc kubenswrapper[4796]: I1125 14:42:43.026348 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25a388f4-cd5a-404d-a777-46f4410e0b3a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"25a388f4-cd5a-404d-a777-46f4410e0b3a\") " pod="openstack/openstack-galera-0" Nov 25 14:42:43 crc kubenswrapper[4796]: I1125 14:42:43.026365 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"25a388f4-cd5a-404d-a777-46f4410e0b3a\") " pod="openstack/openstack-galera-0" Nov 25 14:42:43 crc kubenswrapper[4796]: I1125 14:42:43.026381 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbz8q\" (UniqueName: \"kubernetes.io/projected/25a388f4-cd5a-404d-a777-46f4410e0b3a-kube-api-access-lbz8q\") pod \"openstack-galera-0\" (UID: \"25a388f4-cd5a-404d-a777-46f4410e0b3a\") " pod="openstack/openstack-galera-0" Nov 25 14:42:43 crc kubenswrapper[4796]: I1125 14:42:43.026424 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/25a388f4-cd5a-404d-a777-46f4410e0b3a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"25a388f4-cd5a-404d-a777-46f4410e0b3a\") " pod="openstack/openstack-galera-0" Nov 25 14:42:43 crc kubenswrapper[4796]: I1125 14:42:43.026885 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/25a388f4-cd5a-404d-a777-46f4410e0b3a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"25a388f4-cd5a-404d-a777-46f4410e0b3a\") " pod="openstack/openstack-galera-0" Nov 25 14:42:43 crc 
kubenswrapper[4796]: I1125 14:42:43.027033 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"25a388f4-cd5a-404d-a777-46f4410e0b3a\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Nov 25 14:42:43 crc kubenswrapper[4796]: I1125 14:42:43.027608 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/25a388f4-cd5a-404d-a777-46f4410e0b3a-kolla-config\") pod \"openstack-galera-0\" (UID: \"25a388f4-cd5a-404d-a777-46f4410e0b3a\") " pod="openstack/openstack-galera-0" Nov 25 14:42:43 crc kubenswrapper[4796]: I1125 14:42:43.027906 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/25a388f4-cd5a-404d-a777-46f4410e0b3a-config-data-default\") pod \"openstack-galera-0\" (UID: \"25a388f4-cd5a-404d-a777-46f4410e0b3a\") " pod="openstack/openstack-galera-0" Nov 25 14:42:43 crc kubenswrapper[4796]: I1125 14:42:43.028416 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25a388f4-cd5a-404d-a777-46f4410e0b3a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"25a388f4-cd5a-404d-a777-46f4410e0b3a\") " pod="openstack/openstack-galera-0" Nov 25 14:42:43 crc kubenswrapper[4796]: I1125 14:42:43.032332 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25a388f4-cd5a-404d-a777-46f4410e0b3a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"25a388f4-cd5a-404d-a777-46f4410e0b3a\") " pod="openstack/openstack-galera-0" Nov 25 14:42:43 crc kubenswrapper[4796]: I1125 14:42:43.032834 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/25a388f4-cd5a-404d-a777-46f4410e0b3a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"25a388f4-cd5a-404d-a777-46f4410e0b3a\") " pod="openstack/openstack-galera-0" Nov 25 14:42:43 crc kubenswrapper[4796]: I1125 14:42:43.084152 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbz8q\" (UniqueName: \"kubernetes.io/projected/25a388f4-cd5a-404d-a777-46f4410e0b3a-kube-api-access-lbz8q\") pod \"openstack-galera-0\" (UID: \"25a388f4-cd5a-404d-a777-46f4410e0b3a\") " pod="openstack/openstack-galera-0" Nov 25 14:42:43 crc kubenswrapper[4796]: I1125 14:42:43.122121 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"25a388f4-cd5a-404d-a777-46f4410e0b3a\") " pod="openstack/openstack-galera-0" Nov 25 14:42:43 crc kubenswrapper[4796]: I1125 14:42:43.133288 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.092783 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.096647 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.099321 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-zlz65" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.100342 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.100390 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.100479 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.109828 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.245079 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfxgm\" (UniqueName: \"kubernetes.io/projected/fba50302-0f98-4117-ae49-f710e1543e98-kube-api-access-xfxgm\") pod \"openstack-cell1-galera-0\" (UID: \"fba50302-0f98-4117-ae49-f710e1543e98\") " pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.245214 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fba50302-0f98-4117-ae49-f710e1543e98-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fba50302-0f98-4117-ae49-f710e1543e98\") " pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.245243 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/fba50302-0f98-4117-ae49-f710e1543e98-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"fba50302-0f98-4117-ae49-f710e1543e98\") " pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.245274 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba50302-0f98-4117-ae49-f710e1543e98-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fba50302-0f98-4117-ae49-f710e1543e98\") " pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.245409 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fba50302-0f98-4117-ae49-f710e1543e98-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"fba50302-0f98-4117-ae49-f710e1543e98\") " pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.245466 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fba50302-0f98-4117-ae49-f710e1543e98-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fba50302-0f98-4117-ae49-f710e1543e98\") " pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.245494 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fba50302-0f98-4117-ae49-f710e1543e98-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fba50302-0f98-4117-ae49-f710e1543e98\") " pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.245625 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"fba50302-0f98-4117-ae49-f710e1543e98\") " pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.347154 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fba50302-0f98-4117-ae49-f710e1543e98-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fba50302-0f98-4117-ae49-f710e1543e98\") " pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.347207 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fba50302-0f98-4117-ae49-f710e1543e98-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"fba50302-0f98-4117-ae49-f710e1543e98\") " pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.347246 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba50302-0f98-4117-ae49-f710e1543e98-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fba50302-0f98-4117-ae49-f710e1543e98\") " pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.347266 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fba50302-0f98-4117-ae49-f710e1543e98-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"fba50302-0f98-4117-ae49-f710e1543e98\") " pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.347284 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/fba50302-0f98-4117-ae49-f710e1543e98-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fba50302-0f98-4117-ae49-f710e1543e98\") " pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.347299 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fba50302-0f98-4117-ae49-f710e1543e98-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fba50302-0f98-4117-ae49-f710e1543e98\") " pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.347338 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"fba50302-0f98-4117-ae49-f710e1543e98\") " pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.347362 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfxgm\" (UniqueName: \"kubernetes.io/projected/fba50302-0f98-4117-ae49-f710e1543e98-kube-api-access-xfxgm\") pod \"openstack-cell1-galera-0\" (UID: \"fba50302-0f98-4117-ae49-f710e1543e98\") " pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.347714 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fba50302-0f98-4117-ae49-f710e1543e98-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"fba50302-0f98-4117-ae49-f710e1543e98\") " pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.347922 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" 
(UID: \"fba50302-0f98-4117-ae49-f710e1543e98\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.349157 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fba50302-0f98-4117-ae49-f710e1543e98-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fba50302-0f98-4117-ae49-f710e1543e98\") " pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.349390 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fba50302-0f98-4117-ae49-f710e1543e98-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fba50302-0f98-4117-ae49-f710e1543e98\") " pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.350145 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fba50302-0f98-4117-ae49-f710e1543e98-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"fba50302-0f98-4117-ae49-f710e1543e98\") " pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.359063 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fba50302-0f98-4117-ae49-f710e1543e98-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fba50302-0f98-4117-ae49-f710e1543e98\") " pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.359411 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba50302-0f98-4117-ae49-f710e1543e98-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fba50302-0f98-4117-ae49-f710e1543e98\") " 
pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.365533 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"fba50302-0f98-4117-ae49-f710e1543e98\") " pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.370951 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfxgm\" (UniqueName: \"kubernetes.io/projected/fba50302-0f98-4117-ae49-f710e1543e98-kube-api-access-xfxgm\") pod \"openstack-cell1-galera-0\" (UID: \"fba50302-0f98-4117-ae49-f710e1543e98\") " pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.423466 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.561430 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.562361 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.567974 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.568197 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.568324 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-ljf8v" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.596625 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.650957 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/241f82db-29d5-4cb8-bd81-3e758b9cd855-memcached-tls-certs\") pod \"memcached-0\" (UID: \"241f82db-29d5-4cb8-bd81-3e758b9cd855\") " pod="openstack/memcached-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.651026 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxzp7\" (UniqueName: \"kubernetes.io/projected/241f82db-29d5-4cb8-bd81-3e758b9cd855-kube-api-access-sxzp7\") pod \"memcached-0\" (UID: \"241f82db-29d5-4cb8-bd81-3e758b9cd855\") " pod="openstack/memcached-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.651128 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/241f82db-29d5-4cb8-bd81-3e758b9cd855-config-data\") pod \"memcached-0\" (UID: \"241f82db-29d5-4cb8-bd81-3e758b9cd855\") " pod="openstack/memcached-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.651144 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/241f82db-29d5-4cb8-bd81-3e758b9cd855-combined-ca-bundle\") pod \"memcached-0\" (UID: \"241f82db-29d5-4cb8-bd81-3e758b9cd855\") " pod="openstack/memcached-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.651162 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/241f82db-29d5-4cb8-bd81-3e758b9cd855-kolla-config\") pod \"memcached-0\" (UID: \"241f82db-29d5-4cb8-bd81-3e758b9cd855\") " pod="openstack/memcached-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.752957 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/241f82db-29d5-4cb8-bd81-3e758b9cd855-config-data\") pod \"memcached-0\" (UID: \"241f82db-29d5-4cb8-bd81-3e758b9cd855\") " pod="openstack/memcached-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.752999 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/241f82db-29d5-4cb8-bd81-3e758b9cd855-combined-ca-bundle\") pod \"memcached-0\" (UID: \"241f82db-29d5-4cb8-bd81-3e758b9cd855\") " pod="openstack/memcached-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.753017 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/241f82db-29d5-4cb8-bd81-3e758b9cd855-kolla-config\") pod \"memcached-0\" (UID: \"241f82db-29d5-4cb8-bd81-3e758b9cd855\") " pod="openstack/memcached-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.753060 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/241f82db-29d5-4cb8-bd81-3e758b9cd855-memcached-tls-certs\") pod \"memcached-0\" (UID: \"241f82db-29d5-4cb8-bd81-3e758b9cd855\") " 
pod="openstack/memcached-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.753103 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxzp7\" (UniqueName: \"kubernetes.io/projected/241f82db-29d5-4cb8-bd81-3e758b9cd855-kube-api-access-sxzp7\") pod \"memcached-0\" (UID: \"241f82db-29d5-4cb8-bd81-3e758b9cd855\") " pod="openstack/memcached-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.753845 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/241f82db-29d5-4cb8-bd81-3e758b9cd855-config-data\") pod \"memcached-0\" (UID: \"241f82db-29d5-4cb8-bd81-3e758b9cd855\") " pod="openstack/memcached-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.754868 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/241f82db-29d5-4cb8-bd81-3e758b9cd855-kolla-config\") pod \"memcached-0\" (UID: \"241f82db-29d5-4cb8-bd81-3e758b9cd855\") " pod="openstack/memcached-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.767389 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/241f82db-29d5-4cb8-bd81-3e758b9cd855-combined-ca-bundle\") pod \"memcached-0\" (UID: \"241f82db-29d5-4cb8-bd81-3e758b9cd855\") " pod="openstack/memcached-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.767795 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/241f82db-29d5-4cb8-bd81-3e758b9cd855-memcached-tls-certs\") pod \"memcached-0\" (UID: \"241f82db-29d5-4cb8-bd81-3e758b9cd855\") " pod="openstack/memcached-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.796186 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxzp7\" (UniqueName: 
\"kubernetes.io/projected/241f82db-29d5-4cb8-bd81-3e758b9cd855-kube-api-access-sxzp7\") pod \"memcached-0\" (UID: \"241f82db-29d5-4cb8-bd81-3e758b9cd855\") " pod="openstack/memcached-0" Nov 25 14:42:44 crc kubenswrapper[4796]: I1125 14:42:44.881908 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 25 14:42:46 crc kubenswrapper[4796]: I1125 14:42:46.481363 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 14:42:46 crc kubenswrapper[4796]: I1125 14:42:46.483035 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 14:42:46 crc kubenswrapper[4796]: I1125 14:42:46.485639 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-fdsvk" Nov 25 14:42:46 crc kubenswrapper[4796]: I1125 14:42:46.492783 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 14:42:46 crc kubenswrapper[4796]: I1125 14:42:46.580552 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgqw2\" (UniqueName: \"kubernetes.io/projected/93d132ee-a59b-4244-8d56-895b7a49b14d-kube-api-access-bgqw2\") pod \"kube-state-metrics-0\" (UID: \"93d132ee-a59b-4244-8d56-895b7a49b14d\") " pod="openstack/kube-state-metrics-0" Nov 25 14:42:46 crc kubenswrapper[4796]: I1125 14:42:46.682757 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgqw2\" (UniqueName: \"kubernetes.io/projected/93d132ee-a59b-4244-8d56-895b7a49b14d-kube-api-access-bgqw2\") pod \"kube-state-metrics-0\" (UID: \"93d132ee-a59b-4244-8d56-895b7a49b14d\") " pod="openstack/kube-state-metrics-0" Nov 25 14:42:46 crc kubenswrapper[4796]: I1125 14:42:46.701407 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgqw2\" (UniqueName: 
\"kubernetes.io/projected/93d132ee-a59b-4244-8d56-895b7a49b14d-kube-api-access-bgqw2\") pod \"kube-state-metrics-0\" (UID: \"93d132ee-a59b-4244-8d56-895b7a49b14d\") " pod="openstack/kube-state-metrics-0" Nov 25 14:42:46 crc kubenswrapper[4796]: I1125 14:42:46.816069 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 14:42:48 crc kubenswrapper[4796]: I1125 14:42:48.107650 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.179603 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-jftkt"] Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.180798 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jftkt" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.185039 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.185156 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.185045 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-vfpxk" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.192731 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jftkt"] Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.200355 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-bcptz"] Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.201893 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-bcptz" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.257690 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bcptz"] Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.351043 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718-var-log-ovn\") pod \"ovn-controller-jftkt\" (UID: \"9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718\") " pod="openstack/ovn-controller-jftkt" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.351358 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718-var-run\") pod \"ovn-controller-jftkt\" (UID: \"9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718\") " pod="openstack/ovn-controller-jftkt" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.351506 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718-combined-ca-bundle\") pod \"ovn-controller-jftkt\" (UID: \"9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718\") " pod="openstack/ovn-controller-jftkt" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.351655 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5txpg\" (UniqueName: \"kubernetes.io/projected/9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718-kube-api-access-5txpg\") pod \"ovn-controller-jftkt\" (UID: \"9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718\") " pod="openstack/ovn-controller-jftkt" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.351757 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/130773d9-cc1a-46d3-91a4-1880735e0351-var-lib\") pod \"ovn-controller-ovs-bcptz\" (UID: \"130773d9-cc1a-46d3-91a4-1880735e0351\") " pod="openstack/ovn-controller-ovs-bcptz" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.351856 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/130773d9-cc1a-46d3-91a4-1880735e0351-etc-ovs\") pod \"ovn-controller-ovs-bcptz\" (UID: \"130773d9-cc1a-46d3-91a4-1880735e0351\") " pod="openstack/ovn-controller-ovs-bcptz" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.351925 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718-ovn-controller-tls-certs\") pod \"ovn-controller-jftkt\" (UID: \"9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718\") " pod="openstack/ovn-controller-jftkt" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.351993 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5dhx\" (UniqueName: \"kubernetes.io/projected/130773d9-cc1a-46d3-91a4-1880735e0351-kube-api-access-q5dhx\") pod \"ovn-controller-ovs-bcptz\" (UID: \"130773d9-cc1a-46d3-91a4-1880735e0351\") " pod="openstack/ovn-controller-ovs-bcptz" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.352070 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/130773d9-cc1a-46d3-91a4-1880735e0351-var-log\") pod \"ovn-controller-ovs-bcptz\" (UID: \"130773d9-cc1a-46d3-91a4-1880735e0351\") " pod="openstack/ovn-controller-ovs-bcptz" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.352146 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718-scripts\") pod \"ovn-controller-jftkt\" (UID: \"9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718\") " pod="openstack/ovn-controller-jftkt" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.352222 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/130773d9-cc1a-46d3-91a4-1880735e0351-scripts\") pod \"ovn-controller-ovs-bcptz\" (UID: \"130773d9-cc1a-46d3-91a4-1880735e0351\") " pod="openstack/ovn-controller-ovs-bcptz" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.352302 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/130773d9-cc1a-46d3-91a4-1880735e0351-var-run\") pod \"ovn-controller-ovs-bcptz\" (UID: \"130773d9-cc1a-46d3-91a4-1880735e0351\") " pod="openstack/ovn-controller-ovs-bcptz" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.352374 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718-var-run-ovn\") pod \"ovn-controller-jftkt\" (UID: \"9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718\") " pod="openstack/ovn-controller-jftkt" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.453521 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/130773d9-cc1a-46d3-91a4-1880735e0351-var-log\") pod \"ovn-controller-ovs-bcptz\" (UID: \"130773d9-cc1a-46d3-91a4-1880735e0351\") " pod="openstack/ovn-controller-ovs-bcptz" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.453584 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718-scripts\") pod \"ovn-controller-jftkt\" (UID: 
\"9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718\") " pod="openstack/ovn-controller-jftkt" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.453652 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/130773d9-cc1a-46d3-91a4-1880735e0351-scripts\") pod \"ovn-controller-ovs-bcptz\" (UID: \"130773d9-cc1a-46d3-91a4-1880735e0351\") " pod="openstack/ovn-controller-ovs-bcptz" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.453675 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/130773d9-cc1a-46d3-91a4-1880735e0351-var-run\") pod \"ovn-controller-ovs-bcptz\" (UID: \"130773d9-cc1a-46d3-91a4-1880735e0351\") " pod="openstack/ovn-controller-ovs-bcptz" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.453699 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718-var-run-ovn\") pod \"ovn-controller-jftkt\" (UID: \"9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718\") " pod="openstack/ovn-controller-jftkt" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.453764 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718-var-log-ovn\") pod \"ovn-controller-jftkt\" (UID: \"9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718\") " pod="openstack/ovn-controller-jftkt" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.453792 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718-var-run\") pod \"ovn-controller-jftkt\" (UID: \"9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718\") " pod="openstack/ovn-controller-jftkt" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.453825 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718-combined-ca-bundle\") pod \"ovn-controller-jftkt\" (UID: \"9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718\") " pod="openstack/ovn-controller-jftkt" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.453856 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5txpg\" (UniqueName: \"kubernetes.io/projected/9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718-kube-api-access-5txpg\") pod \"ovn-controller-jftkt\" (UID: \"9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718\") " pod="openstack/ovn-controller-jftkt" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.453883 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/130773d9-cc1a-46d3-91a4-1880735e0351-var-lib\") pod \"ovn-controller-ovs-bcptz\" (UID: \"130773d9-cc1a-46d3-91a4-1880735e0351\") " pod="openstack/ovn-controller-ovs-bcptz" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.453944 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/130773d9-cc1a-46d3-91a4-1880735e0351-etc-ovs\") pod \"ovn-controller-ovs-bcptz\" (UID: \"130773d9-cc1a-46d3-91a4-1880735e0351\") " pod="openstack/ovn-controller-ovs-bcptz" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.453968 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718-ovn-controller-tls-certs\") pod \"ovn-controller-jftkt\" (UID: \"9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718\") " pod="openstack/ovn-controller-jftkt" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.453997 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5dhx\" 
(UniqueName: \"kubernetes.io/projected/130773d9-cc1a-46d3-91a4-1880735e0351-kube-api-access-q5dhx\") pod \"ovn-controller-ovs-bcptz\" (UID: \"130773d9-cc1a-46d3-91a4-1880735e0351\") " pod="openstack/ovn-controller-ovs-bcptz" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.454167 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/130773d9-cc1a-46d3-91a4-1880735e0351-var-log\") pod \"ovn-controller-ovs-bcptz\" (UID: \"130773d9-cc1a-46d3-91a4-1880735e0351\") " pod="openstack/ovn-controller-ovs-bcptz" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.454748 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/130773d9-cc1a-46d3-91a4-1880735e0351-var-lib\") pod \"ovn-controller-ovs-bcptz\" (UID: \"130773d9-cc1a-46d3-91a4-1880735e0351\") " pod="openstack/ovn-controller-ovs-bcptz" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.454890 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/130773d9-cc1a-46d3-91a4-1880735e0351-etc-ovs\") pod \"ovn-controller-ovs-bcptz\" (UID: \"130773d9-cc1a-46d3-91a4-1880735e0351\") " pod="openstack/ovn-controller-ovs-bcptz" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.455365 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718-var-run-ovn\") pod \"ovn-controller-jftkt\" (UID: \"9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718\") " pod="openstack/ovn-controller-jftkt" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.455473 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/130773d9-cc1a-46d3-91a4-1880735e0351-var-run\") pod \"ovn-controller-ovs-bcptz\" (UID: \"130773d9-cc1a-46d3-91a4-1880735e0351\") " 
pod="openstack/ovn-controller-ovs-bcptz" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.455567 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718-var-log-ovn\") pod \"ovn-controller-jftkt\" (UID: \"9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718\") " pod="openstack/ovn-controller-jftkt" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.455643 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718-var-run\") pod \"ovn-controller-jftkt\" (UID: \"9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718\") " pod="openstack/ovn-controller-jftkt" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.455884 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/130773d9-cc1a-46d3-91a4-1880735e0351-scripts\") pod \"ovn-controller-ovs-bcptz\" (UID: \"130773d9-cc1a-46d3-91a4-1880735e0351\") " pod="openstack/ovn-controller-ovs-bcptz" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.458024 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718-scripts\") pod \"ovn-controller-jftkt\" (UID: \"9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718\") " pod="openstack/ovn-controller-jftkt" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.458948 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718-ovn-controller-tls-certs\") pod \"ovn-controller-jftkt\" (UID: \"9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718\") " pod="openstack/ovn-controller-jftkt" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.460332 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718-combined-ca-bundle\") pod \"ovn-controller-jftkt\" (UID: \"9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718\") " pod="openstack/ovn-controller-jftkt" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.469617 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5txpg\" (UniqueName: \"kubernetes.io/projected/9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718-kube-api-access-5txpg\") pod \"ovn-controller-jftkt\" (UID: \"9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718\") " pod="openstack/ovn-controller-jftkt" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.470364 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5dhx\" (UniqueName: \"kubernetes.io/projected/130773d9-cc1a-46d3-91a4-1880735e0351-kube-api-access-q5dhx\") pod \"ovn-controller-ovs-bcptz\" (UID: \"130773d9-cc1a-46d3-91a4-1880735e0351\") " pod="openstack/ovn-controller-ovs-bcptz" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.500914 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jftkt" Nov 25 14:42:50 crc kubenswrapper[4796]: I1125 14:42:50.513167 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-bcptz" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.091763 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.093718 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.098513 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.098806 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.098947 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.099377 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-lxqjz" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.099768 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.103832 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.266917 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c9e8c13-5a24-4394-bdc8-aa4965e931b8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1c9e8c13-5a24-4394-bdc8-aa4965e931b8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.266959 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1c9e8c13-5a24-4394-bdc8-aa4965e931b8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1c9e8c13-5a24-4394-bdc8-aa4965e931b8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.266993 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c9e8c13-5a24-4394-bdc8-aa4965e931b8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1c9e8c13-5a24-4394-bdc8-aa4965e931b8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.267022 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c9e8c13-5a24-4394-bdc8-aa4965e931b8-config\") pod \"ovsdbserver-sb-0\" (UID: \"1c9e8c13-5a24-4394-bdc8-aa4965e931b8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.267055 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c9e8c13-5a24-4394-bdc8-aa4965e931b8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1c9e8c13-5a24-4394-bdc8-aa4965e931b8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.267109 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c9e8c13-5a24-4394-bdc8-aa4965e931b8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1c9e8c13-5a24-4394-bdc8-aa4965e931b8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.267129 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bbsj\" (UniqueName: \"kubernetes.io/projected/1c9e8c13-5a24-4394-bdc8-aa4965e931b8-kube-api-access-2bbsj\") pod \"ovsdbserver-sb-0\" (UID: \"1c9e8c13-5a24-4394-bdc8-aa4965e931b8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.267227 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1c9e8c13-5a24-4394-bdc8-aa4965e931b8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.368590 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c9e8c13-5a24-4394-bdc8-aa4965e931b8-config\") pod \"ovsdbserver-sb-0\" (UID: \"1c9e8c13-5a24-4394-bdc8-aa4965e931b8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.368697 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c9e8c13-5a24-4394-bdc8-aa4965e931b8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1c9e8c13-5a24-4394-bdc8-aa4965e931b8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.368743 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c9e8c13-5a24-4394-bdc8-aa4965e931b8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1c9e8c13-5a24-4394-bdc8-aa4965e931b8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.368766 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bbsj\" (UniqueName: \"kubernetes.io/projected/1c9e8c13-5a24-4394-bdc8-aa4965e931b8-kube-api-access-2bbsj\") pod \"ovsdbserver-sb-0\" (UID: \"1c9e8c13-5a24-4394-bdc8-aa4965e931b8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.368798 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1c9e8c13-5a24-4394-bdc8-aa4965e931b8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 
14:42:51.368865 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c9e8c13-5a24-4394-bdc8-aa4965e931b8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1c9e8c13-5a24-4394-bdc8-aa4965e931b8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.368892 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1c9e8c13-5a24-4394-bdc8-aa4965e931b8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1c9e8c13-5a24-4394-bdc8-aa4965e931b8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.368928 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c9e8c13-5a24-4394-bdc8-aa4965e931b8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1c9e8c13-5a24-4394-bdc8-aa4965e931b8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.369312 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1c9e8c13-5a24-4394-bdc8-aa4965e931b8\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.369353 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c9e8c13-5a24-4394-bdc8-aa4965e931b8-config\") pod \"ovsdbserver-sb-0\" (UID: \"1c9e8c13-5a24-4394-bdc8-aa4965e931b8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.369910 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/1c9e8c13-5a24-4394-bdc8-aa4965e931b8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1c9e8c13-5a24-4394-bdc8-aa4965e931b8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.373715 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c9e8c13-5a24-4394-bdc8-aa4965e931b8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1c9e8c13-5a24-4394-bdc8-aa4965e931b8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.375145 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c9e8c13-5a24-4394-bdc8-aa4965e931b8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1c9e8c13-5a24-4394-bdc8-aa4965e931b8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.375208 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c9e8c13-5a24-4394-bdc8-aa4965e931b8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1c9e8c13-5a24-4394-bdc8-aa4965e931b8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.377338 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c9e8c13-5a24-4394-bdc8-aa4965e931b8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1c9e8c13-5a24-4394-bdc8-aa4965e931b8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.389989 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bbsj\" (UniqueName: \"kubernetes.io/projected/1c9e8c13-5a24-4394-bdc8-aa4965e931b8-kube-api-access-2bbsj\") pod \"ovsdbserver-sb-0\" (UID: \"1c9e8c13-5a24-4394-bdc8-aa4965e931b8\") " 
pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.401708 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1c9e8c13-5a24-4394-bdc8-aa4965e931b8\") " pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:51 crc kubenswrapper[4796]: I1125 14:42:51.418507 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 25 14:42:52 crc kubenswrapper[4796]: I1125 14:42:52.969625 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 14:42:52 crc kubenswrapper[4796]: I1125 14:42:52.972920 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 25 14:42:52 crc kubenswrapper[4796]: I1125 14:42:52.974667 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 25 14:42:52 crc kubenswrapper[4796]: I1125 14:42:52.974981 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 25 14:42:52 crc kubenswrapper[4796]: I1125 14:42:52.975379 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 25 14:42:52 crc kubenswrapper[4796]: I1125 14:42:52.975817 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-7kgds" Nov 25 14:42:52 crc kubenswrapper[4796]: I1125 14:42:52.984500 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 14:42:53 crc kubenswrapper[4796]: I1125 14:42:53.098791 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064-config\") pod \"ovsdbserver-nb-0\" 
(UID: \"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064\") " pod="openstack/ovsdbserver-nb-0" Nov 25 14:42:53 crc kubenswrapper[4796]: I1125 14:42:53.098860 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064\") " pod="openstack/ovsdbserver-nb-0" Nov 25 14:42:53 crc kubenswrapper[4796]: I1125 14:42:53.099087 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064\") " pod="openstack/ovsdbserver-nb-0" Nov 25 14:42:53 crc kubenswrapper[4796]: I1125 14:42:53.099159 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b87pl\" (UniqueName: \"kubernetes.io/projected/9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064-kube-api-access-b87pl\") pod \"ovsdbserver-nb-0\" (UID: \"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064\") " pod="openstack/ovsdbserver-nb-0" Nov 25 14:42:53 crc kubenswrapper[4796]: I1125 14:42:53.099271 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064\") " pod="openstack/ovsdbserver-nb-0" Nov 25 14:42:53 crc kubenswrapper[4796]: I1125 14:42:53.099355 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064\") " pod="openstack/ovsdbserver-nb-0" 
Nov 25 14:42:53 crc kubenswrapper[4796]: I1125 14:42:53.099422 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064\") " pod="openstack/ovsdbserver-nb-0" Nov 25 14:42:53 crc kubenswrapper[4796]: I1125 14:42:53.099473 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064\") " pod="openstack/ovsdbserver-nb-0" Nov 25 14:42:53 crc kubenswrapper[4796]: I1125 14:42:53.201438 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064-config\") pod \"ovsdbserver-nb-0\" (UID: \"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064\") " pod="openstack/ovsdbserver-nb-0" Nov 25 14:42:53 crc kubenswrapper[4796]: I1125 14:42:53.201541 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064\") " pod="openstack/ovsdbserver-nb-0" Nov 25 14:42:53 crc kubenswrapper[4796]: I1125 14:42:53.201624 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064\") " pod="openstack/ovsdbserver-nb-0" Nov 25 14:42:53 crc kubenswrapper[4796]: I1125 14:42:53.201668 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-b87pl\" (UniqueName: \"kubernetes.io/projected/9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064-kube-api-access-b87pl\") pod \"ovsdbserver-nb-0\" (UID: \"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064\") " pod="openstack/ovsdbserver-nb-0" Nov 25 14:42:53 crc kubenswrapper[4796]: I1125 14:42:53.201713 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064\") " pod="openstack/ovsdbserver-nb-0" Nov 25 14:42:53 crc kubenswrapper[4796]: I1125 14:42:53.201747 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064\") " pod="openstack/ovsdbserver-nb-0" Nov 25 14:42:53 crc kubenswrapper[4796]: I1125 14:42:53.201785 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064\") " pod="openstack/ovsdbserver-nb-0" Nov 25 14:42:53 crc kubenswrapper[4796]: I1125 14:42:53.201810 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064\") " pod="openstack/ovsdbserver-nb-0" Nov 25 14:42:53 crc kubenswrapper[4796]: I1125 14:42:53.202635 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064-config\") pod \"ovsdbserver-nb-0\" (UID: 
\"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064\") " pod="openstack/ovsdbserver-nb-0" Nov 25 14:42:53 crc kubenswrapper[4796]: I1125 14:42:53.203135 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064\") " pod="openstack/ovsdbserver-nb-0" Nov 25 14:42:53 crc kubenswrapper[4796]: I1125 14:42:53.203446 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-nb-0" Nov 25 14:42:53 crc kubenswrapper[4796]: I1125 14:42:53.204635 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064\") " pod="openstack/ovsdbserver-nb-0" Nov 25 14:42:53 crc kubenswrapper[4796]: I1125 14:42:53.208116 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064\") " pod="openstack/ovsdbserver-nb-0" Nov 25 14:42:53 crc kubenswrapper[4796]: I1125 14:42:53.223555 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064\") " pod="openstack/ovsdbserver-nb-0" Nov 25 14:42:53 crc kubenswrapper[4796]: I1125 14:42:53.240624 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-b87pl\" (UniqueName: \"kubernetes.io/projected/9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064-kube-api-access-b87pl\") pod \"ovsdbserver-nb-0\" (UID: \"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064\") " pod="openstack/ovsdbserver-nb-0" Nov 25 14:42:53 crc kubenswrapper[4796]: I1125 14:42:53.240854 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064\") " pod="openstack/ovsdbserver-nb-0" Nov 25 14:42:53 crc kubenswrapper[4796]: I1125 14:42:53.246529 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064\") " pod="openstack/ovsdbserver-nb-0" Nov 25 14:42:53 crc kubenswrapper[4796]: I1125 14:42:53.303733 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 25 14:42:53 crc kubenswrapper[4796]: W1125 14:42:53.958372 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1729cee4_39e5_4e3c_90ed_51b16a110a6a.slice/crio-4194d5f27c5d9412a620f2b0859b5330fae76cf094dd7d9e110177cc6418d04e WatchSource:0}: Error finding container 4194d5f27c5d9412a620f2b0859b5330fae76cf094dd7d9e110177cc6418d04e: Status 404 returned error can't find the container with id 4194d5f27c5d9412a620f2b0859b5330fae76cf094dd7d9e110177cc6418d04e Nov 25 14:42:54 crc kubenswrapper[4796]: I1125 14:42:54.723139 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1729cee4-39e5-4e3c-90ed-51b16a110a6a","Type":"ContainerStarted","Data":"4194d5f27c5d9412a620f2b0859b5330fae76cf094dd7d9e110177cc6418d04e"} Nov 25 14:42:54 crc kubenswrapper[4796]: E1125 14:42:54.861971 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 25 14:42:54 crc kubenswrapper[4796]: E1125 14:42:54.862143 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4v6fv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-csg29_openstack(0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 14:42:54 crc kubenswrapper[4796]: E1125 14:42:54.863811 4796 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-csg29" podUID="0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2" Nov 25 14:42:54 crc kubenswrapper[4796]: E1125 14:42:54.885279 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 25 14:42:54 crc kubenswrapper[4796]: E1125 14:42:54.885691 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rzgts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-kspv4_openstack(cdba9c40-fa3e-4234-8217-5ea48d209af0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 14:42:54 crc kubenswrapper[4796]: E1125 14:42:54.886865 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-kspv4" podUID="cdba9c40-fa3e-4234-8217-5ea48d209af0" Nov 25 14:42:55 crc kubenswrapper[4796]: I1125 14:42:55.425341 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jftkt"] Nov 25 14:42:55 crc kubenswrapper[4796]: I1125 14:42:55.743485 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jftkt" event={"ID":"9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718","Type":"ContainerStarted","Data":"ec3a7426c00391538d1a177a8394df43607c0ec4cebad5742a1746d51ba1a122"} Nov 25 14:42:55 crc kubenswrapper[4796]: I1125 14:42:55.798593 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 14:42:55 crc kubenswrapper[4796]: I1125 14:42:55.811599 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kcw2g"] Nov 25 14:42:55 crc kubenswrapper[4796]: I1125 14:42:55.823656 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zv5xz"] Nov 25 14:42:55 crc kubenswrapper[4796]: I1125 14:42:55.833084 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 14:42:55 crc kubenswrapper[4796]: I1125 14:42:55.846853 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 25 14:42:55 crc kubenswrapper[4796]: W1125 14:42:55.876282 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25a388f4_cd5a_404d_a777_46f4410e0b3a.slice/crio-8530b2ef85952647493887dc255b4519fdb306d310d35080e275035d252f2416 WatchSource:0}: Error finding container 8530b2ef85952647493887dc255b4519fdb306d310d35080e275035d252f2416: Status 404 returned error can't find the container with id 8530b2ef85952647493887dc255b4519fdb306d310d35080e275035d252f2416 Nov 25 14:42:55 crc kubenswrapper[4796]: I1125 14:42:55.876324 4796 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/memcached-0"] Nov 25 14:42:55 crc kubenswrapper[4796]: I1125 14:42:55.883431 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 14:42:55 crc kubenswrapper[4796]: I1125 14:42:55.941503 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bcptz"] Nov 25 14:42:55 crc kubenswrapper[4796]: W1125 14:42:55.948587 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod130773d9_cc1a_46d3_91a4_1880735e0351.slice/crio-8a3c0c1dc250c5b5ec4b4ade02195bb6976d83c670af64965ba7f0d61b032462 WatchSource:0}: Error finding container 8a3c0c1dc250c5b5ec4b4ade02195bb6976d83c670af64965ba7f0d61b032462: Status 404 returned error can't find the container with id 8a3c0c1dc250c5b5ec4b4ade02195bb6976d83c670af64965ba7f0d61b032462 Nov 25 14:42:56 crc kubenswrapper[4796]: I1125 14:42:56.750719 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fba50302-0f98-4117-ae49-f710e1543e98","Type":"ContainerStarted","Data":"ac1391cc56200e4fd75a5f29157c9452e318b41f4c4705c98e4281da21f01d0e"} Nov 25 14:42:56 crc kubenswrapper[4796]: I1125 14:42:56.751736 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kcw2g" event={"ID":"a0e1037c-c549-4fcf-9c16-13721a1b8bd3","Type":"ContainerStarted","Data":"1bbceb00d1bc81a6772148a90c606ec1b9eeefb6ddac21b415697743b5d24290"} Nov 25 14:42:56 crc kubenswrapper[4796]: I1125 14:42:56.752866 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bcptz" event={"ID":"130773d9-cc1a-46d3-91a4-1880735e0351","Type":"ContainerStarted","Data":"8a3c0c1dc250c5b5ec4b4ade02195bb6976d83c670af64965ba7f0d61b032462"} Nov 25 14:42:56 crc kubenswrapper[4796]: I1125 14:42:56.753950 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"df357d5a-93ca-48cc-bcec-b01ba247136e","Type":"ContainerStarted","Data":"84c922b6bf5dec75ade0e3eccb6293a40c8ff11ba8209679e57238bf5be8a933"} Nov 25 14:42:56 crc kubenswrapper[4796]: I1125 14:42:56.755164 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"93d132ee-a59b-4244-8d56-895b7a49b14d","Type":"ContainerStarted","Data":"b7d14141aec75a925f2f9b736fa8d743a0ae98929084c261d535282e097f58c0"} Nov 25 14:42:56 crc kubenswrapper[4796]: I1125 14:42:56.756311 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"25a388f4-cd5a-404d-a777-46f4410e0b3a","Type":"ContainerStarted","Data":"8530b2ef85952647493887dc255b4519fdb306d310d35080e275035d252f2416"} Nov 25 14:42:56 crc kubenswrapper[4796]: I1125 14:42:56.757420 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"241f82db-29d5-4cb8-bd81-3e758b9cd855","Type":"ContainerStarted","Data":"7186f693b342af1cb9fd46e835618350dda8cbea17cdc0de4f96a83104f64c74"} Nov 25 14:42:56 crc kubenswrapper[4796]: I1125 14:42:56.758357 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zv5xz" event={"ID":"e67870b3-2007-43e4-86cc-d4e4153c3e15","Type":"ContainerStarted","Data":"a790560c293f2e96d784a99fefbb99ed6c83b119af7faa0ba2c9ecdfdc84ff2d"} Nov 25 14:42:56 crc kubenswrapper[4796]: I1125 14:42:56.904895 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.002613 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 14:42:57 crc kubenswrapper[4796]: W1125 14:42:57.394465 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f7a16fc_0fd1_4d5b_ae32_9f7d95e8a064.slice/crio-097ee100f51909aef55740d705dc504a095afacee153598d426e3c80f61c06dd 
WatchSource:0}: Error finding container 097ee100f51909aef55740d705dc504a095afacee153598d426e3c80f61c06dd: Status 404 returned error can't find the container with id 097ee100f51909aef55740d705dc504a095afacee153598d426e3c80f61c06dd Nov 25 14:42:57 crc kubenswrapper[4796]: W1125 14:42:57.395278 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c9e8c13_5a24_4394_bdc8_aa4965e931b8.slice/crio-03764e423cb66e6221f9bfb0ba905f3965ae60c0233fb6d5ecbfd5988f285d5c WatchSource:0}: Error finding container 03764e423cb66e6221f9bfb0ba905f3965ae60c0233fb6d5ecbfd5988f285d5c: Status 404 returned error can't find the container with id 03764e423cb66e6221f9bfb0ba905f3965ae60c0233fb6d5ecbfd5988f285d5c Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.459020 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-csg29" Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.464401 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kspv4" Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.590165 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzgts\" (UniqueName: \"kubernetes.io/projected/cdba9c40-fa3e-4234-8217-5ea48d209af0-kube-api-access-rzgts\") pod \"cdba9c40-fa3e-4234-8217-5ea48d209af0\" (UID: \"cdba9c40-fa3e-4234-8217-5ea48d209af0\") " Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.590256 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2-dns-svc\") pod \"0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2\" (UID: \"0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2\") " Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.590612 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdba9c40-fa3e-4234-8217-5ea48d209af0-config\") pod \"cdba9c40-fa3e-4234-8217-5ea48d209af0\" (UID: \"cdba9c40-fa3e-4234-8217-5ea48d209af0\") " Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.590703 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2-config\") pod \"0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2\" (UID: \"0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2\") " Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.590728 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v6fv\" (UniqueName: \"kubernetes.io/projected/0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2-kube-api-access-4v6fv\") pod \"0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2\" (UID: \"0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2\") " Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.590750 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2" (UID: "0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.591095 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdba9c40-fa3e-4234-8217-5ea48d209af0-config" (OuterVolumeSpecName: "config") pod "cdba9c40-fa3e-4234-8217-5ea48d209af0" (UID: "cdba9c40-fa3e-4234-8217-5ea48d209af0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.591263 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdba9c40-fa3e-4234-8217-5ea48d209af0-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.591280 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.591643 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2-config" (OuterVolumeSpecName: "config") pod "0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2" (UID: "0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.596082 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2-kube-api-access-4v6fv" (OuterVolumeSpecName: "kube-api-access-4v6fv") pod "0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2" (UID: "0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2"). 
InnerVolumeSpecName "kube-api-access-4v6fv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.596918 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdba9c40-fa3e-4234-8217-5ea48d209af0-kube-api-access-rzgts" (OuterVolumeSpecName: "kube-api-access-rzgts") pod "cdba9c40-fa3e-4234-8217-5ea48d209af0" (UID: "cdba9c40-fa3e-4234-8217-5ea48d209af0"). InnerVolumeSpecName "kube-api-access-rzgts". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.693828 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.693922 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v6fv\" (UniqueName: \"kubernetes.io/projected/0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2-kube-api-access-4v6fv\") on node \"crc\" DevicePath \"\"" Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.693942 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzgts\" (UniqueName: \"kubernetes.io/projected/cdba9c40-fa3e-4234-8217-5ea48d209af0-kube-api-access-rzgts\") on node \"crc\" DevicePath \"\"" Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.771331 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064","Type":"ContainerStarted","Data":"097ee100f51909aef55740d705dc504a095afacee153598d426e3c80f61c06dd"} Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.772825 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-csg29" event={"ID":"0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2","Type":"ContainerDied","Data":"cad390ab6174722b8b8885393f6bf4df041084cdc892c54eed92a540053f52d5"} Nov 
25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.772852 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-csg29" Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.774254 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-kspv4" event={"ID":"cdba9c40-fa3e-4234-8217-5ea48d209af0","Type":"ContainerDied","Data":"e040ecd14414917c2887a6c54ef2150b8fc1f5163eba25099c6f846bf16cc2ee"} Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.774312 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kspv4" Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.775669 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1c9e8c13-5a24-4394-bdc8-aa4965e931b8","Type":"ContainerStarted","Data":"03764e423cb66e6221f9bfb0ba905f3965ae60c0233fb6d5ecbfd5988f285d5c"} Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.832453 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-csg29"] Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.844292 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-csg29"] Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.858729 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kspv4"] Nov 25 14:42:57 crc kubenswrapper[4796]: I1125 14:42:57.864489 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kspv4"] Nov 25 14:42:57 crc kubenswrapper[4796]: E1125 14:42:57.951286 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdba9c40_fa3e_4234_8217_5ea48d209af0.slice/crio-e040ecd14414917c2887a6c54ef2150b8fc1f5163eba25099c6f846bf16cc2ee\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f24e5a8_7011_4fc1_97be_6e0bdaf41fc2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f24e5a8_7011_4fc1_97be_6e0bdaf41fc2.slice/crio-cad390ab6174722b8b8885393f6bf4df041084cdc892c54eed92a540053f52d5\": RecentStats: unable to find data in memory cache]" Nov 25 14:42:58 crc kubenswrapper[4796]: I1125 14:42:58.420283 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2" path="/var/lib/kubelet/pods/0f24e5a8-7011-4fc1-97be-6e0bdaf41fc2/volumes" Nov 25 14:42:58 crc kubenswrapper[4796]: I1125 14:42:58.420699 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdba9c40-fa3e-4234-8217-5ea48d209af0" path="/var/lib/kubelet/pods/cdba9c40-fa3e-4234-8217-5ea48d209af0/volumes" Nov 25 14:43:04 crc kubenswrapper[4796]: I1125 14:43:04.832330 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bcptz" event={"ID":"130773d9-cc1a-46d3-91a4-1880735e0351","Type":"ContainerStarted","Data":"dcfb1aed184eb9d6c7d0f41dec8a1092773ec2479141d1b1ea09ee61c4dcabeb"} Nov 25 14:43:04 crc kubenswrapper[4796]: I1125 14:43:04.834367 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jftkt" event={"ID":"9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718","Type":"ContainerStarted","Data":"85e34e2cc4447de7ce6fedfe0cea1b2f4eedbb3f6bb518c1fb12bb18dde96528"} Nov 25 14:43:04 crc kubenswrapper[4796]: I1125 14:43:04.834592 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-jftkt" Nov 25 14:43:04 crc kubenswrapper[4796]: I1125 14:43:04.837251 4796 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"241f82db-29d5-4cb8-bd81-3e758b9cd855","Type":"ContainerStarted","Data":"64a35f855fab2c5cd5fdae7a83588908a9438566ccada929435ac0a45c869698"} Nov 25 14:43:04 crc kubenswrapper[4796]: I1125 14:43:04.837402 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 25 14:43:04 crc kubenswrapper[4796]: I1125 14:43:04.838987 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fba50302-0f98-4117-ae49-f710e1543e98","Type":"ContainerStarted","Data":"e68f5a171eef6db51f64bc556e8682e8675ad0d972e8582a54017c96773ea645"} Nov 25 14:43:04 crc kubenswrapper[4796]: I1125 14:43:04.879045 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=13.442592057 podStartE2EDuration="20.879027631s" podCreationTimestamp="2025-11-25 14:42:44 +0000 UTC" firstStartedPulling="2025-11-25 14:42:55.868746103 +0000 UTC m=+1104.211855527" lastFinishedPulling="2025-11-25 14:43:03.305181667 +0000 UTC m=+1111.648291101" observedRunningTime="2025-11-25 14:43:04.875214703 +0000 UTC m=+1113.218324137" watchObservedRunningTime="2025-11-25 14:43:04.879027631 +0000 UTC m=+1113.222137055" Nov 25 14:43:04 crc kubenswrapper[4796]: I1125 14:43:04.902549 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-jftkt" podStartSLOduration=7.450299875 podStartE2EDuration="14.902522815s" podCreationTimestamp="2025-11-25 14:42:50 +0000 UTC" firstStartedPulling="2025-11-25 14:42:55.445263171 +0000 UTC m=+1103.788372595" lastFinishedPulling="2025-11-25 14:43:02.897486101 +0000 UTC m=+1111.240595535" observedRunningTime="2025-11-25 14:43:04.890745672 +0000 UTC m=+1113.233855116" watchObservedRunningTime="2025-11-25 14:43:04.902522815 +0000 UTC m=+1113.245632239" Nov 25 14:43:05 crc kubenswrapper[4796]: I1125 14:43:05.845462 4796 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"93d132ee-a59b-4244-8d56-895b7a49b14d","Type":"ContainerStarted","Data":"c054952d82d41cf9617850fa4e893e32d90a611f3fd33de5b8aac3e521064048"} Nov 25 14:43:05 crc kubenswrapper[4796]: I1125 14:43:05.846690 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 14:43:05 crc kubenswrapper[4796]: I1125 14:43:05.848062 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"25a388f4-cd5a-404d-a777-46f4410e0b3a","Type":"ContainerStarted","Data":"0a8061bf6daa14dc0068f55d99a50456b359e2706dee5ee95f44da08aef2db1a"} Nov 25 14:43:05 crc kubenswrapper[4796]: I1125 14:43:05.850173 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1729cee4-39e5-4e3c-90ed-51b16a110a6a","Type":"ContainerStarted","Data":"8961effce604b4c965298d42d063ee066e28fd802e9ac7ff26c6935f9c6c981d"} Nov 25 14:43:05 crc kubenswrapper[4796]: I1125 14:43:05.852228 4796 generic.go:334] "Generic (PLEG): container finished" podID="e67870b3-2007-43e4-86cc-d4e4153c3e15" containerID="103302dc9479807416e23beaae522cca418190c9096589e958f95addfa408798" exitCode=0 Nov 25 14:43:05 crc kubenswrapper[4796]: I1125 14:43:05.852289 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zv5xz" event={"ID":"e67870b3-2007-43e4-86cc-d4e4153c3e15","Type":"ContainerDied","Data":"103302dc9479807416e23beaae522cca418190c9096589e958f95addfa408798"} Nov 25 14:43:05 crc kubenswrapper[4796]: I1125 14:43:05.854131 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1c9e8c13-5a24-4394-bdc8-aa4965e931b8","Type":"ContainerStarted","Data":"6d3b4ade7c655966894ca600a2b7fba33fc06962d8446e4e02f411377e3a5b18"} Nov 25 14:43:05 crc kubenswrapper[4796]: I1125 14:43:05.855731 4796 generic.go:334] "Generic (PLEG): 
container finished" podID="a0e1037c-c549-4fcf-9c16-13721a1b8bd3" containerID="77ac4f288735593a44ad0bef6e4d90ed860b2b39f8e3da991f67e4bbed77aab4" exitCode=0 Nov 25 14:43:05 crc kubenswrapper[4796]: I1125 14:43:05.855767 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kcw2g" event={"ID":"a0e1037c-c549-4fcf-9c16-13721a1b8bd3","Type":"ContainerDied","Data":"77ac4f288735593a44ad0bef6e4d90ed860b2b39f8e3da991f67e4bbed77aab4"} Nov 25 14:43:05 crc kubenswrapper[4796]: I1125 14:43:05.857051 4796 generic.go:334] "Generic (PLEG): container finished" podID="130773d9-cc1a-46d3-91a4-1880735e0351" containerID="dcfb1aed184eb9d6c7d0f41dec8a1092773ec2479141d1b1ea09ee61c4dcabeb" exitCode=0 Nov 25 14:43:05 crc kubenswrapper[4796]: I1125 14:43:05.857122 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bcptz" event={"ID":"130773d9-cc1a-46d3-91a4-1880735e0351","Type":"ContainerDied","Data":"dcfb1aed184eb9d6c7d0f41dec8a1092773ec2479141d1b1ea09ee61c4dcabeb"} Nov 25 14:43:05 crc kubenswrapper[4796]: I1125 14:43:05.860310 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064","Type":"ContainerStarted","Data":"8be4441f689c60e1ff35c5f3aef889283f203f33e821572a80722f8d6d78a9d9"} Nov 25 14:43:05 crc kubenswrapper[4796]: I1125 14:43:05.876090 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.322085334 podStartE2EDuration="19.876070035s" podCreationTimestamp="2025-11-25 14:42:46 +0000 UTC" firstStartedPulling="2025-11-25 14:42:55.848990885 +0000 UTC m=+1104.192100309" lastFinishedPulling="2025-11-25 14:43:05.402975586 +0000 UTC m=+1113.746085010" observedRunningTime="2025-11-25 14:43:05.868913785 +0000 UTC m=+1114.212023209" watchObservedRunningTime="2025-11-25 14:43:05.876070035 +0000 UTC m=+1114.219179459" Nov 25 14:43:05 crc kubenswrapper[4796]: 
E1125 14:43:05.972242 4796 mount_linux.go:282] Mount failed: exit status 32 Nov 25 14:43:05 crc kubenswrapper[4796]: Mounting command: mount Nov 25 14:43:05 crc kubenswrapper[4796]: Mounting arguments: --no-canonicalize -o bind /proc/4796/fd/26 /var/lib/kubelet/pods/e67870b3-2007-43e4-86cc-d4e4153c3e15/volume-subpaths/dns-svc/dnsmasq-dns/1 Nov 25 14:43:05 crc kubenswrapper[4796]: Output: mount: /var/lib/kubelet/pods/e67870b3-2007-43e4-86cc-d4e4153c3e15/volume-subpaths/dns-svc/dnsmasq-dns/1: mount(2) system call failed: No such file or directory. Nov 25 14:43:06 crc kubenswrapper[4796]: E1125 14:43:06.387187 4796 kubelet_pods.go:349] "Failed to prepare subPath for volumeMount of the container" err=< Nov 25 14:43:06 crc kubenswrapper[4796]: error mounting /var/lib/kubelet/pods/e67870b3-2007-43e4-86cc-d4e4153c3e15/volumes/kubernetes.io~configmap/dns-svc/..2025_11_25_14_42_40.1270815981/dns-svc: mount failed: exit status 32 Nov 25 14:43:06 crc kubenswrapper[4796]: Mounting command: mount Nov 25 14:43:06 crc kubenswrapper[4796]: Mounting arguments: --no-canonicalize -o bind /proc/4796/fd/26 /var/lib/kubelet/pods/e67870b3-2007-43e4-86cc-d4e4153c3e15/volume-subpaths/dns-svc/dnsmasq-dns/1 Nov 25 14:43:06 crc kubenswrapper[4796]: Output: mount: /var/lib/kubelet/pods/e67870b3-2007-43e4-86cc-d4e4153c3e15/volume-subpaths/dns-svc/dnsmasq-dns/1: mount(2) system call failed: No such file or directory. 
Nov 25 14:43:06 crc kubenswrapper[4796]: > containerName="dnsmasq-dns" volumeMountName="dns-svc" Nov 25 14:43:06 crc kubenswrapper[4796]: E1125 14:43:06.387566 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p6xt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-zv5xz_openstack(e67870b3-2007-43e4-86cc-d4e4153c3e15): CreateContainerConfigError: failed to prepare subPath for volumeMount \"dns-svc\" of container \"dnsmasq-dns\"" logger="UnhandledError" Nov 25 14:43:06 crc kubenswrapper[4796]: E1125 14:43:06.388853 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerConfigError: \"failed to prepare subPath for volumeMount \\\"dns-svc\\\" of container \\\"dnsmasq-dns\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-zv5xz" podUID="e67870b3-2007-43e4-86cc-d4e4153c3e15" Nov 25 14:43:06 crc kubenswrapper[4796]: I1125 14:43:06.872424 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kcw2g" event={"ID":"a0e1037c-c549-4fcf-9c16-13721a1b8bd3","Type":"ContainerStarted","Data":"0de390c89321413daf17fa6c7dc3aae0b1b003423e90032b984bac96e3b67ef0"} Nov 25 14:43:06 crc kubenswrapper[4796]: I1125 14:43:06.872888 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-kcw2g" Nov 25 14:43:06 crc kubenswrapper[4796]: I1125 14:43:06.878668 4796 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bcptz" event={"ID":"130773d9-cc1a-46d3-91a4-1880735e0351","Type":"ContainerStarted","Data":"0cd6511c877eaac906efb184fb58c093c6172dfbfe9da67db5d1e1da315b4f5b"} Nov 25 14:43:06 crc kubenswrapper[4796]: I1125 14:43:06.878743 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bcptz" event={"ID":"130773d9-cc1a-46d3-91a4-1880735e0351","Type":"ContainerStarted","Data":"f04d7dede8fbe10499a79c2a70a95f75c792b1bb1ef611afbf4a99e1f9cb61e6"} Nov 25 14:43:06 crc kubenswrapper[4796]: I1125 14:43:06.879380 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bcptz" Nov 25 14:43:06 crc kubenswrapper[4796]: I1125 14:43:06.879414 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bcptz" Nov 25 14:43:06 crc kubenswrapper[4796]: I1125 14:43:06.882676 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"df357d5a-93ca-48cc-bcec-b01ba247136e","Type":"ContainerStarted","Data":"2f9c4eeced77b6eec1a9654b22885d08fe3f01dda9ebd117770b343c82ceaa1c"} Nov 25 14:43:06 crc kubenswrapper[4796]: I1125 14:43:06.896091 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-kcw2g" podStartSLOduration=19.871784406 podStartE2EDuration="26.896072605s" podCreationTimestamp="2025-11-25 14:42:40 +0000 UTC" firstStartedPulling="2025-11-25 14:42:55.874659561 +0000 UTC m=+1104.217768985" lastFinishedPulling="2025-11-25 14:43:02.89894773 +0000 UTC m=+1111.242057184" observedRunningTime="2025-11-25 14:43:06.890471198 +0000 UTC m=+1115.233580642" watchObservedRunningTime="2025-11-25 14:43:06.896072605 +0000 UTC m=+1115.239182029" Nov 25 14:43:06 crc kubenswrapper[4796]: I1125 14:43:06.914852 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-bcptz" 
podStartSLOduration=13.520828543 podStartE2EDuration="16.914834351s" podCreationTimestamp="2025-11-25 14:42:50 +0000 UTC" firstStartedPulling="2025-11-25 14:42:55.950189601 +0000 UTC m=+1104.293299025" lastFinishedPulling="2025-11-25 14:42:59.344195409 +0000 UTC m=+1107.687304833" observedRunningTime="2025-11-25 14:43:06.908141517 +0000 UTC m=+1115.251250971" watchObservedRunningTime="2025-11-25 14:43:06.914834351 +0000 UTC m=+1115.257943765" Nov 25 14:43:07 crc kubenswrapper[4796]: I1125 14:43:07.891007 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zv5xz" event={"ID":"e67870b3-2007-43e4-86cc-d4e4153c3e15","Type":"ContainerStarted","Data":"ac3a5b5966c388902eecf866ea31e51f7695f3ba5e3ef40065d6658bef062d23"} Nov 25 14:43:07 crc kubenswrapper[4796]: I1125 14:43:07.910231 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-zv5xz" podStartSLOduration=21.599655976 podStartE2EDuration="28.910213519s" podCreationTimestamp="2025-11-25 14:42:39 +0000 UTC" firstStartedPulling="2025-11-25 14:42:55.899965755 +0000 UTC m=+1104.243075179" lastFinishedPulling="2025-11-25 14:43:03.210523298 +0000 UTC m=+1111.553632722" observedRunningTime="2025-11-25 14:43:07.908013486 +0000 UTC m=+1116.251122920" watchObservedRunningTime="2025-11-25 14:43:07.910213519 +0000 UTC m=+1116.253322933" Nov 25 14:43:09 crc kubenswrapper[4796]: I1125 14:43:09.883355 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 25 14:43:09 crc kubenswrapper[4796]: I1125 14:43:09.929921 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064","Type":"ContainerStarted","Data":"67cd801efa727a715a0e8216d12b6690ee97e28690e882a406a0b9d829acf81e"} Nov 25 14:43:09 crc kubenswrapper[4796]: I1125 14:43:09.935320 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-sb-0" event={"ID":"1c9e8c13-5a24-4394-bdc8-aa4965e931b8","Type":"ContainerStarted","Data":"3c7c9c459b693e977f1c82fbf1c41e1fef8b444898b5ef50372ac31e69b445aa"} Nov 25 14:43:09 crc kubenswrapper[4796]: I1125 14:43:09.958991 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=6.970480684 podStartE2EDuration="18.958964751s" podCreationTimestamp="2025-11-25 14:42:51 +0000 UTC" firstStartedPulling="2025-11-25 14:42:57.400877225 +0000 UTC m=+1105.743986649" lastFinishedPulling="2025-11-25 14:43:09.389361252 +0000 UTC m=+1117.732470716" observedRunningTime="2025-11-25 14:43:09.95354092 +0000 UTC m=+1118.296650354" watchObservedRunningTime="2025-11-25 14:43:09.958964751 +0000 UTC m=+1118.302074175" Nov 25 14:43:09 crc kubenswrapper[4796]: I1125 14:43:09.988911 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=8.027795558 podStartE2EDuration="19.988890101s" podCreationTimestamp="2025-11-25 14:42:50 +0000 UTC" firstStartedPulling="2025-11-25 14:42:57.396943934 +0000 UTC m=+1105.740053368" lastFinishedPulling="2025-11-25 14:43:09.358038467 +0000 UTC m=+1117.701147911" observedRunningTime="2025-11-25 14:43:09.981294556 +0000 UTC m=+1118.324403980" watchObservedRunningTime="2025-11-25 14:43:09.988890101 +0000 UTC m=+1118.331999535" Nov 25 14:43:10 crc kubenswrapper[4796]: I1125 14:43:10.188442 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-zv5xz" Nov 25 14:43:10 crc kubenswrapper[4796]: I1125 14:43:10.943502 4796 generic.go:334] "Generic (PLEG): container finished" podID="fba50302-0f98-4117-ae49-f710e1543e98" containerID="e68f5a171eef6db51f64bc556e8682e8675ad0d972e8582a54017c96773ea645" exitCode=0 Nov 25 14:43:10 crc kubenswrapper[4796]: I1125 14:43:10.943560 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-cell1-galera-0" event={"ID":"fba50302-0f98-4117-ae49-f710e1543e98","Type":"ContainerDied","Data":"e68f5a171eef6db51f64bc556e8682e8675ad0d972e8582a54017c96773ea645"} Nov 25 14:43:11 crc kubenswrapper[4796]: I1125 14:43:11.303956 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 25 14:43:11 crc kubenswrapper[4796]: I1125 14:43:11.353055 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 25 14:43:11 crc kubenswrapper[4796]: I1125 14:43:11.418892 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 25 14:43:11 crc kubenswrapper[4796]: I1125 14:43:11.958338 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.012546 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.283354 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zv5xz"] Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.283689 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-zv5xz" podUID="e67870b3-2007-43e4-86cc-d4e4153c3e15" containerName="dnsmasq-dns" containerID="cri-o://ac3a5b5966c388902eecf866ea31e51f7695f3ba5e3ef40065d6658bef062d23" gracePeriod=10 Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.285820 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-zv5xz" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.319694 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-m4j6w"] Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.338900 4796 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-m4j6w" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.358232 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.429090 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-t8mfd"] Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.430690 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-m4j6w"] Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.430720 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.430853 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-t8mfd" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.430954 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-t8mfd"] Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.433929 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.474634 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8826632-6e92-47a0-80ed-2a08f466b851-config\") pod \"dnsmasq-dns-7fd796d7df-m4j6w\" (UID: \"f8826632-6e92-47a0-80ed-2a08f466b851\") " pod="openstack/dnsmasq-dns-7fd796d7df-m4j6w" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.474712 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8826632-6e92-47a0-80ed-2a08f466b851-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-m4j6w\" (UID: 
\"f8826632-6e92-47a0-80ed-2a08f466b851\") " pod="openstack/dnsmasq-dns-7fd796d7df-m4j6w" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.475190 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8826632-6e92-47a0-80ed-2a08f466b851-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-m4j6w\" (UID: \"f8826632-6e92-47a0-80ed-2a08f466b851\") " pod="openstack/dnsmasq-dns-7fd796d7df-m4j6w" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.475280 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7sfh\" (UniqueName: \"kubernetes.io/projected/f8826632-6e92-47a0-80ed-2a08f466b851-kube-api-access-q7sfh\") pod \"dnsmasq-dns-7fd796d7df-m4j6w\" (UID: \"f8826632-6e92-47a0-80ed-2a08f466b851\") " pod="openstack/dnsmasq-dns-7fd796d7df-m4j6w" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.475887 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.577070 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5d31b742-a284-4a5f-a151-2ee4077a3071-ovn-rundir\") pod \"ovn-controller-metrics-t8mfd\" (UID: \"5d31b742-a284-4a5f-a151-2ee4077a3071\") " pod="openstack/ovn-controller-metrics-t8mfd" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.577137 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d31b742-a284-4a5f-a151-2ee4077a3071-config\") pod \"ovn-controller-metrics-t8mfd\" (UID: \"5d31b742-a284-4a5f-a151-2ee4077a3071\") " pod="openstack/ovn-controller-metrics-t8mfd" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.577253 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qghv\" (UniqueName: \"kubernetes.io/projected/5d31b742-a284-4a5f-a151-2ee4077a3071-kube-api-access-9qghv\") pod \"ovn-controller-metrics-t8mfd\" (UID: \"5d31b742-a284-4a5f-a151-2ee4077a3071\") " pod="openstack/ovn-controller-metrics-t8mfd" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.577293 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8826632-6e92-47a0-80ed-2a08f466b851-config\") pod \"dnsmasq-dns-7fd796d7df-m4j6w\" (UID: \"f8826632-6e92-47a0-80ed-2a08f466b851\") " pod="openstack/dnsmasq-dns-7fd796d7df-m4j6w" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.577390 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8826632-6e92-47a0-80ed-2a08f466b851-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-m4j6w\" (UID: \"f8826632-6e92-47a0-80ed-2a08f466b851\") " pod="openstack/dnsmasq-dns-7fd796d7df-m4j6w" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.577534 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8826632-6e92-47a0-80ed-2a08f466b851-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-m4j6w\" (UID: \"f8826632-6e92-47a0-80ed-2a08f466b851\") " pod="openstack/dnsmasq-dns-7fd796d7df-m4j6w" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.577765 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d31b742-a284-4a5f-a151-2ee4077a3071-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-t8mfd\" (UID: \"5d31b742-a284-4a5f-a151-2ee4077a3071\") " pod="openstack/ovn-controller-metrics-t8mfd" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.578417 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8826632-6e92-47a0-80ed-2a08f466b851-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-m4j6w\" (UID: \"f8826632-6e92-47a0-80ed-2a08f466b851\") " pod="openstack/dnsmasq-dns-7fd796d7df-m4j6w" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.578503 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8826632-6e92-47a0-80ed-2a08f466b851-config\") pod \"dnsmasq-dns-7fd796d7df-m4j6w\" (UID: \"f8826632-6e92-47a0-80ed-2a08f466b851\") " pod="openstack/dnsmasq-dns-7fd796d7df-m4j6w" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.578514 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8826632-6e92-47a0-80ed-2a08f466b851-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-m4j6w\" (UID: \"f8826632-6e92-47a0-80ed-2a08f466b851\") " pod="openstack/dnsmasq-dns-7fd796d7df-m4j6w" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.578600 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5d31b742-a284-4a5f-a151-2ee4077a3071-ovs-rundir\") pod \"ovn-controller-metrics-t8mfd\" (UID: \"5d31b742-a284-4a5f-a151-2ee4077a3071\") " pod="openstack/ovn-controller-metrics-t8mfd" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.578673 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d31b742-a284-4a5f-a151-2ee4077a3071-combined-ca-bundle\") pod \"ovn-controller-metrics-t8mfd\" (UID: \"5d31b742-a284-4a5f-a151-2ee4077a3071\") " pod="openstack/ovn-controller-metrics-t8mfd" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.578708 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7sfh\" 
(UniqueName: \"kubernetes.io/projected/f8826632-6e92-47a0-80ed-2a08f466b851-kube-api-access-q7sfh\") pod \"dnsmasq-dns-7fd796d7df-m4j6w\" (UID: \"f8826632-6e92-47a0-80ed-2a08f466b851\") " pod="openstack/dnsmasq-dns-7fd796d7df-m4j6w" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.602015 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7sfh\" (UniqueName: \"kubernetes.io/projected/f8826632-6e92-47a0-80ed-2a08f466b851-kube-api-access-q7sfh\") pod \"dnsmasq-dns-7fd796d7df-m4j6w\" (UID: \"f8826632-6e92-47a0-80ed-2a08f466b851\") " pod="openstack/dnsmasq-dns-7fd796d7df-m4j6w" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.605377 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kcw2g"] Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.605730 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-kcw2g" podUID="a0e1037c-c549-4fcf-9c16-13721a1b8bd3" containerName="dnsmasq-dns" containerID="cri-o://0de390c89321413daf17fa6c7dc3aae0b1b003423e90032b984bac96e3b67ef0" gracePeriod=10 Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.606705 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-kcw2g" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.650863 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-flfsz"] Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.652315 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.654263 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.671297 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-flfsz"] Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.686466 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d31b742-a284-4a5f-a151-2ee4077a3071-combined-ca-bundle\") pod \"ovn-controller-metrics-t8mfd\" (UID: \"5d31b742-a284-4a5f-a151-2ee4077a3071\") " pod="openstack/ovn-controller-metrics-t8mfd" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.686523 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5d31b742-a284-4a5f-a151-2ee4077a3071-ovn-rundir\") pod \"ovn-controller-metrics-t8mfd\" (UID: \"5d31b742-a284-4a5f-a151-2ee4077a3071\") " pod="openstack/ovn-controller-metrics-t8mfd" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.686546 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d31b742-a284-4a5f-a151-2ee4077a3071-config\") pod \"ovn-controller-metrics-t8mfd\" (UID: \"5d31b742-a284-4a5f-a151-2ee4077a3071\") " pod="openstack/ovn-controller-metrics-t8mfd" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.686630 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qghv\" (UniqueName: \"kubernetes.io/projected/5d31b742-a284-4a5f-a151-2ee4077a3071-kube-api-access-9qghv\") pod \"ovn-controller-metrics-t8mfd\" (UID: \"5d31b742-a284-4a5f-a151-2ee4077a3071\") " pod="openstack/ovn-controller-metrics-t8mfd" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 
14:43:12.686682 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d31b742-a284-4a5f-a151-2ee4077a3071-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-t8mfd\" (UID: \"5d31b742-a284-4a5f-a151-2ee4077a3071\") " pod="openstack/ovn-controller-metrics-t8mfd" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.686711 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5d31b742-a284-4a5f-a151-2ee4077a3071-ovs-rundir\") pod \"ovn-controller-metrics-t8mfd\" (UID: \"5d31b742-a284-4a5f-a151-2ee4077a3071\") " pod="openstack/ovn-controller-metrics-t8mfd" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.687316 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5d31b742-a284-4a5f-a151-2ee4077a3071-ovs-rundir\") pod \"ovn-controller-metrics-t8mfd\" (UID: \"5d31b742-a284-4a5f-a151-2ee4077a3071\") " pod="openstack/ovn-controller-metrics-t8mfd" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.688103 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d31b742-a284-4a5f-a151-2ee4077a3071-config\") pod \"ovn-controller-metrics-t8mfd\" (UID: \"5d31b742-a284-4a5f-a151-2ee4077a3071\") " pod="openstack/ovn-controller-metrics-t8mfd" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.688441 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5d31b742-a284-4a5f-a151-2ee4077a3071-ovn-rundir\") pod \"ovn-controller-metrics-t8mfd\" (UID: \"5d31b742-a284-4a5f-a151-2ee4077a3071\") " pod="openstack/ovn-controller-metrics-t8mfd" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.695270 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d31b742-a284-4a5f-a151-2ee4077a3071-combined-ca-bundle\") pod \"ovn-controller-metrics-t8mfd\" (UID: \"5d31b742-a284-4a5f-a151-2ee4077a3071\") " pod="openstack/ovn-controller-metrics-t8mfd" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.704110 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qghv\" (UniqueName: \"kubernetes.io/projected/5d31b742-a284-4a5f-a151-2ee4077a3071-kube-api-access-9qghv\") pod \"ovn-controller-metrics-t8mfd\" (UID: \"5d31b742-a284-4a5f-a151-2ee4077a3071\") " pod="openstack/ovn-controller-metrics-t8mfd" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.707246 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d31b742-a284-4a5f-a151-2ee4077a3071-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-t8mfd\" (UID: \"5d31b742-a284-4a5f-a151-2ee4077a3071\") " pod="openstack/ovn-controller-metrics-t8mfd" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.750860 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-m4j6w" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.765096 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-t8mfd" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.789993 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1d37852-53c0-4b9f-8089-a89a96d82753-config\") pod \"dnsmasq-dns-86db49b7ff-flfsz\" (UID: \"e1d37852-53c0-4b9f-8089-a89a96d82753\") " pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.790077 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1d37852-53c0-4b9f-8089-a89a96d82753-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-flfsz\" (UID: \"e1d37852-53c0-4b9f-8089-a89a96d82753\") " pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.790170 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1d37852-53c0-4b9f-8089-a89a96d82753-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-flfsz\" (UID: \"e1d37852-53c0-4b9f-8089-a89a96d82753\") " pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.790387 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5lmv\" (UniqueName: \"kubernetes.io/projected/e1d37852-53c0-4b9f-8089-a89a96d82753-kube-api-access-s5lmv\") pod \"dnsmasq-dns-86db49b7ff-flfsz\" (UID: \"e1d37852-53c0-4b9f-8089-a89a96d82753\") " pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.790467 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1d37852-53c0-4b9f-8089-a89a96d82753-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-flfsz\" 
(UID: \"e1d37852-53c0-4b9f-8089-a89a96d82753\") " pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.891261 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1d37852-53c0-4b9f-8089-a89a96d82753-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-flfsz\" (UID: \"e1d37852-53c0-4b9f-8089-a89a96d82753\") " pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.891313 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5lmv\" (UniqueName: \"kubernetes.io/projected/e1d37852-53c0-4b9f-8089-a89a96d82753-kube-api-access-s5lmv\") pod \"dnsmasq-dns-86db49b7ff-flfsz\" (UID: \"e1d37852-53c0-4b9f-8089-a89a96d82753\") " pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.891344 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1d37852-53c0-4b9f-8089-a89a96d82753-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-flfsz\" (UID: \"e1d37852-53c0-4b9f-8089-a89a96d82753\") " pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.891388 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1d37852-53c0-4b9f-8089-a89a96d82753-config\") pod \"dnsmasq-dns-86db49b7ff-flfsz\" (UID: \"e1d37852-53c0-4b9f-8089-a89a96d82753\") " pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.891438 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1d37852-53c0-4b9f-8089-a89a96d82753-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-flfsz\" (UID: \"e1d37852-53c0-4b9f-8089-a89a96d82753\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.892328 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1d37852-53c0-4b9f-8089-a89a96d82753-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-flfsz\" (UID: \"e1d37852-53c0-4b9f-8089-a89a96d82753\") " pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.892809 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1d37852-53c0-4b9f-8089-a89a96d82753-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-flfsz\" (UID: \"e1d37852-53c0-4b9f-8089-a89a96d82753\") " pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.892925 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1d37852-53c0-4b9f-8089-a89a96d82753-config\") pod \"dnsmasq-dns-86db49b7ff-flfsz\" (UID: \"e1d37852-53c0-4b9f-8089-a89a96d82753\") " pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.893483 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1d37852-53c0-4b9f-8089-a89a96d82753-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-flfsz\" (UID: \"e1d37852-53c0-4b9f-8089-a89a96d82753\") " pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.915920 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5lmv\" (UniqueName: \"kubernetes.io/projected/e1d37852-53c0-4b9f-8089-a89a96d82753-kube-api-access-s5lmv\") pod \"dnsmasq-dns-86db49b7ff-flfsz\" (UID: \"e1d37852-53c0-4b9f-8089-a89a96d82753\") " pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.967495 
4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.971947 4796 generic.go:334] "Generic (PLEG): container finished" podID="e67870b3-2007-43e4-86cc-d4e4153c3e15" containerID="ac3a5b5966c388902eecf866ea31e51f7695f3ba5e3ef40065d6658bef062d23" exitCode=0 Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.972018 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zv5xz" event={"ID":"e67870b3-2007-43e4-86cc-d4e4153c3e15","Type":"ContainerDied","Data":"ac3a5b5966c388902eecf866ea31e51f7695f3ba5e3ef40065d6658bef062d23"} Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.973857 4796 generic.go:334] "Generic (PLEG): container finished" podID="a0e1037c-c549-4fcf-9c16-13721a1b8bd3" containerID="0de390c89321413daf17fa6c7dc3aae0b1b003423e90032b984bac96e3b67ef0" exitCode=0 Nov 25 14:43:12 crc kubenswrapper[4796]: I1125 14:43:12.973999 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kcw2g" event={"ID":"a0e1037c-c549-4fcf-9c16-13721a1b8bd3","Type":"ContainerDied","Data":"0de390c89321413daf17fa6c7dc3aae0b1b003423e90032b984bac96e3b67ef0"} Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.041222 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-t8mfd"] Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.042216 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.200077 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-m4j6w"] Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.212803 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.214646 4796 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.217318 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-q8rtp" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.219861 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.220041 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.222743 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.256105 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.367045 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zv5xz" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.403710 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5336ecd-5d7e-4b73-b2a7-d289b8578641-scripts\") pod \"ovn-northd-0\" (UID: \"b5336ecd-5d7e-4b73-b2a7-d289b8578641\") " pod="openstack/ovn-northd-0" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.403759 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5336ecd-5d7e-4b73-b2a7-d289b8578641-config\") pod \"ovn-northd-0\" (UID: \"b5336ecd-5d7e-4b73-b2a7-d289b8578641\") " pod="openstack/ovn-northd-0" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.403798 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5336ecd-5d7e-4b73-b2a7-d289b8578641-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b5336ecd-5d7e-4b73-b2a7-d289b8578641\") " pod="openstack/ovn-northd-0" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.403870 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5336ecd-5d7e-4b73-b2a7-d289b8578641-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b5336ecd-5d7e-4b73-b2a7-d289b8578641\") " pod="openstack/ovn-northd-0" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.403921 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lsvz\" (UniqueName: \"kubernetes.io/projected/b5336ecd-5d7e-4b73-b2a7-d289b8578641-kube-api-access-4lsvz\") pod \"ovn-northd-0\" (UID: \"b5336ecd-5d7e-4b73-b2a7-d289b8578641\") " pod="openstack/ovn-northd-0" Nov 25 14:43:13 crc kubenswrapper[4796]: 
I1125 14:43:13.403942 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b5336ecd-5d7e-4b73-b2a7-d289b8578641-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b5336ecd-5d7e-4b73-b2a7-d289b8578641\") " pod="openstack/ovn-northd-0" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.403970 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5336ecd-5d7e-4b73-b2a7-d289b8578641-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b5336ecd-5d7e-4b73-b2a7-d289b8578641\") " pod="openstack/ovn-northd-0" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.504817 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e67870b3-2007-43e4-86cc-d4e4153c3e15-dns-svc\") pod \"e67870b3-2007-43e4-86cc-d4e4153c3e15\" (UID: \"e67870b3-2007-43e4-86cc-d4e4153c3e15\") " Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.504929 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6xt4\" (UniqueName: \"kubernetes.io/projected/e67870b3-2007-43e4-86cc-d4e4153c3e15-kube-api-access-p6xt4\") pod \"e67870b3-2007-43e4-86cc-d4e4153c3e15\" (UID: \"e67870b3-2007-43e4-86cc-d4e4153c3e15\") " Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.505020 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e67870b3-2007-43e4-86cc-d4e4153c3e15-config\") pod \"e67870b3-2007-43e4-86cc-d4e4153c3e15\" (UID: \"e67870b3-2007-43e4-86cc-d4e4153c3e15\") " Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.505268 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lsvz\" (UniqueName: 
\"kubernetes.io/projected/b5336ecd-5d7e-4b73-b2a7-d289b8578641-kube-api-access-4lsvz\") pod \"ovn-northd-0\" (UID: \"b5336ecd-5d7e-4b73-b2a7-d289b8578641\") " pod="openstack/ovn-northd-0" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.505297 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b5336ecd-5d7e-4b73-b2a7-d289b8578641-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b5336ecd-5d7e-4b73-b2a7-d289b8578641\") " pod="openstack/ovn-northd-0" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.505329 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5336ecd-5d7e-4b73-b2a7-d289b8578641-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b5336ecd-5d7e-4b73-b2a7-d289b8578641\") " pod="openstack/ovn-northd-0" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.505357 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5336ecd-5d7e-4b73-b2a7-d289b8578641-scripts\") pod \"ovn-northd-0\" (UID: \"b5336ecd-5d7e-4b73-b2a7-d289b8578641\") " pod="openstack/ovn-northd-0" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.505400 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5336ecd-5d7e-4b73-b2a7-d289b8578641-config\") pod \"ovn-northd-0\" (UID: \"b5336ecd-5d7e-4b73-b2a7-d289b8578641\") " pod="openstack/ovn-northd-0" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.505426 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5336ecd-5d7e-4b73-b2a7-d289b8578641-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b5336ecd-5d7e-4b73-b2a7-d289b8578641\") " pod="openstack/ovn-northd-0" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 
14:43:13.505479 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5336ecd-5d7e-4b73-b2a7-d289b8578641-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b5336ecd-5d7e-4b73-b2a7-d289b8578641\") " pod="openstack/ovn-northd-0" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.508687 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5336ecd-5d7e-4b73-b2a7-d289b8578641-scripts\") pod \"ovn-northd-0\" (UID: \"b5336ecd-5d7e-4b73-b2a7-d289b8578641\") " pod="openstack/ovn-northd-0" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.509210 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5336ecd-5d7e-4b73-b2a7-d289b8578641-config\") pod \"ovn-northd-0\" (UID: \"b5336ecd-5d7e-4b73-b2a7-d289b8578641\") " pod="openstack/ovn-northd-0" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.509755 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e67870b3-2007-43e4-86cc-d4e4153c3e15-kube-api-access-p6xt4" (OuterVolumeSpecName: "kube-api-access-p6xt4") pod "e67870b3-2007-43e4-86cc-d4e4153c3e15" (UID: "e67870b3-2007-43e4-86cc-d4e4153c3e15"). InnerVolumeSpecName "kube-api-access-p6xt4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.511400 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b5336ecd-5d7e-4b73-b2a7-d289b8578641-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b5336ecd-5d7e-4b73-b2a7-d289b8578641\") " pod="openstack/ovn-northd-0" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.524027 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5336ecd-5d7e-4b73-b2a7-d289b8578641-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b5336ecd-5d7e-4b73-b2a7-d289b8578641\") " pod="openstack/ovn-northd-0" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.528714 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5336ecd-5d7e-4b73-b2a7-d289b8578641-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b5336ecd-5d7e-4b73-b2a7-d289b8578641\") " pod="openstack/ovn-northd-0" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.536213 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lsvz\" (UniqueName: \"kubernetes.io/projected/b5336ecd-5d7e-4b73-b2a7-d289b8578641-kube-api-access-4lsvz\") pod \"ovn-northd-0\" (UID: \"b5336ecd-5d7e-4b73-b2a7-d289b8578641\") " pod="openstack/ovn-northd-0" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.541169 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5336ecd-5d7e-4b73-b2a7-d289b8578641-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b5336ecd-5d7e-4b73-b2a7-d289b8578641\") " pod="openstack/ovn-northd-0" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.558318 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-flfsz"] 
Nov 25 14:43:13 crc kubenswrapper[4796]: W1125 14:43:13.569566 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1d37852_53c0_4b9f_8089_a89a96d82753.slice/crio-5beab9c5dc6d68459391d5dcfd2aeebad845bab956e0c500c9bfc2bc449565f4 WatchSource:0}: Error finding container 5beab9c5dc6d68459391d5dcfd2aeebad845bab956e0c500c9bfc2bc449565f4: Status 404 returned error can't find the container with id 5beab9c5dc6d68459391d5dcfd2aeebad845bab956e0c500c9bfc2bc449565f4 Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.571383 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.572095 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e67870b3-2007-43e4-86cc-d4e4153c3e15-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e67870b3-2007-43e4-86cc-d4e4153c3e15" (UID: "e67870b3-2007-43e4-86cc-d4e4153c3e15"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.600002 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e67870b3-2007-43e4-86cc-d4e4153c3e15-config" (OuterVolumeSpecName: "config") pod "e67870b3-2007-43e4-86cc-d4e4153c3e15" (UID: "e67870b3-2007-43e4-86cc-d4e4153c3e15"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.607471 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e67870b3-2007-43e4-86cc-d4e4153c3e15-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.607496 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6xt4\" (UniqueName: \"kubernetes.io/projected/e67870b3-2007-43e4-86cc-d4e4153c3e15-kube-api-access-p6xt4\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.607506 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e67870b3-2007-43e4-86cc-d4e4153c3e15-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.694063 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-kcw2g" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.819302 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr2vm\" (UniqueName: \"kubernetes.io/projected/a0e1037c-c549-4fcf-9c16-13721a1b8bd3-kube-api-access-tr2vm\") pod \"a0e1037c-c549-4fcf-9c16-13721a1b8bd3\" (UID: \"a0e1037c-c549-4fcf-9c16-13721a1b8bd3\") " Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.819399 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0e1037c-c549-4fcf-9c16-13721a1b8bd3-dns-svc\") pod \"a0e1037c-c549-4fcf-9c16-13721a1b8bd3\" (UID: \"a0e1037c-c549-4fcf-9c16-13721a1b8bd3\") " Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.819442 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e1037c-c549-4fcf-9c16-13721a1b8bd3-config\") pod 
\"a0e1037c-c549-4fcf-9c16-13721a1b8bd3\" (UID: \"a0e1037c-c549-4fcf-9c16-13721a1b8bd3\") " Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.876917 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e1037c-c549-4fcf-9c16-13721a1b8bd3-kube-api-access-tr2vm" (OuterVolumeSpecName: "kube-api-access-tr2vm") pod "a0e1037c-c549-4fcf-9c16-13721a1b8bd3" (UID: "a0e1037c-c549-4fcf-9c16-13721a1b8bd3"). InnerVolumeSpecName "kube-api-access-tr2vm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.921567 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr2vm\" (UniqueName: \"kubernetes.io/projected/a0e1037c-c549-4fcf-9c16-13721a1b8bd3-kube-api-access-tr2vm\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.928160 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e1037c-c549-4fcf-9c16-13721a1b8bd3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a0e1037c-c549-4fcf-9c16-13721a1b8bd3" (UID: "a0e1037c-c549-4fcf-9c16-13721a1b8bd3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.943901 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e1037c-c549-4fcf-9c16-13721a1b8bd3-config" (OuterVolumeSpecName: "config") pod "a0e1037c-c549-4fcf-9c16-13721a1b8bd3" (UID: "a0e1037c-c549-4fcf-9c16-13721a1b8bd3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.983200 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-t8mfd" event={"ID":"5d31b742-a284-4a5f-a151-2ee4077a3071","Type":"ContainerStarted","Data":"79117f69d543bf50a87c55aebf8c1868c8cc161a7531194cb2f7a9c6e1afcbdd"} Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.983270 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-t8mfd" event={"ID":"5d31b742-a284-4a5f-a151-2ee4077a3071","Type":"ContainerStarted","Data":"c3c81004b6524e56f3f7de3fb3d1f822d854b8311c204a2bd4e81cd1851a7be9"} Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.984838 4796 generic.go:334] "Generic (PLEG): container finished" podID="f8826632-6e92-47a0-80ed-2a08f466b851" containerID="804e7316d8bcf7219943ab3392898a5794b1d719520291f33e9b74d61c3b4689" exitCode=0 Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.984978 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-m4j6w" event={"ID":"f8826632-6e92-47a0-80ed-2a08f466b851","Type":"ContainerDied","Data":"804e7316d8bcf7219943ab3392898a5794b1d719520291f33e9b74d61c3b4689"} Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.985015 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-m4j6w" event={"ID":"f8826632-6e92-47a0-80ed-2a08f466b851","Type":"ContainerStarted","Data":"18d335a29f798c411e1d5c3d6efbd676af4cb45eb5f6ea19ff2db0f1ce451969"} Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.987331 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zv5xz" event={"ID":"e67870b3-2007-43e4-86cc-d4e4153c3e15","Type":"ContainerDied","Data":"a790560c293f2e96d784a99fefbb99ed6c83b119af7faa0ba2c9ecdfdc84ff2d"} Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.987410 4796 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zv5xz" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.987422 4796 scope.go:117] "RemoveContainer" containerID="ac3a5b5966c388902eecf866ea31e51f7695f3ba5e3ef40065d6658bef062d23" Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.991046 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" event={"ID":"e1d37852-53c0-4b9f-8089-a89a96d82753","Type":"ContainerStarted","Data":"a8ee0c095d2ca6bcc422fe8b5c8092f28a93db996c956483868ed783e2913da9"} Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.991268 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" event={"ID":"e1d37852-53c0-4b9f-8089-a89a96d82753","Type":"ContainerStarted","Data":"5beab9c5dc6d68459391d5dcfd2aeebad845bab956e0c500c9bfc2bc449565f4"} Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.995439 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fba50302-0f98-4117-ae49-f710e1543e98","Type":"ContainerStarted","Data":"431f43d8b656d829305901a259c8c7ae55dc91e1d53e9ae1be0b23d862b8ccd9"} Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.998251 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kcw2g" event={"ID":"a0e1037c-c549-4fcf-9c16-13721a1b8bd3","Type":"ContainerDied","Data":"1bbceb00d1bc81a6772148a90c606ec1b9eeefb6ddac21b415697743b5d24290"} Nov 25 14:43:13 crc kubenswrapper[4796]: I1125 14:43:13.998283 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-kcw2g" Nov 25 14:43:14 crc kubenswrapper[4796]: I1125 14:43:14.002162 4796 generic.go:334] "Generic (PLEG): container finished" podID="25a388f4-cd5a-404d-a777-46f4410e0b3a" containerID="0a8061bf6daa14dc0068f55d99a50456b359e2706dee5ee95f44da08aef2db1a" exitCode=0 Nov 25 14:43:14 crc kubenswrapper[4796]: I1125 14:43:14.003032 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"25a388f4-cd5a-404d-a777-46f4410e0b3a","Type":"ContainerDied","Data":"0a8061bf6daa14dc0068f55d99a50456b359e2706dee5ee95f44da08aef2db1a"} Nov 25 14:43:14 crc kubenswrapper[4796]: I1125 14:43:14.011723 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-t8mfd" podStartSLOduration=2.011699372 podStartE2EDuration="2.011699372s" podCreationTimestamp="2025-11-25 14:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:43:14.001228253 +0000 UTC m=+1122.344337687" watchObservedRunningTime="2025-11-25 14:43:14.011699372 +0000 UTC m=+1122.354808826" Nov 25 14:43:14 crc kubenswrapper[4796]: I1125 14:43:14.012798 4796 scope.go:117] "RemoveContainer" containerID="103302dc9479807416e23beaae522cca418190c9096589e958f95addfa408798" Nov 25 14:43:14 crc kubenswrapper[4796]: I1125 14:43:14.023700 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0e1037c-c549-4fcf-9c16-13721a1b8bd3-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:14 crc kubenswrapper[4796]: I1125 14:43:14.023741 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e1037c-c549-4fcf-9c16-13721a1b8bd3-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:14 crc kubenswrapper[4796]: I1125 14:43:14.042522 4796 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=23.732845168 podStartE2EDuration="31.04250263s" podCreationTimestamp="2025-11-25 14:42:43 +0000 UTC" firstStartedPulling="2025-11-25 14:42:55.901346672 +0000 UTC m=+1104.244456096" lastFinishedPulling="2025-11-25 14:43:03.211004134 +0000 UTC m=+1111.554113558" observedRunningTime="2025-11-25 14:43:14.036757429 +0000 UTC m=+1122.379866853" watchObservedRunningTime="2025-11-25 14:43:14.04250263 +0000 UTC m=+1122.385612054" Nov 25 14:43:14 crc kubenswrapper[4796]: I1125 14:43:14.072684 4796 scope.go:117] "RemoveContainer" containerID="0de390c89321413daf17fa6c7dc3aae0b1b003423e90032b984bac96e3b67ef0" Nov 25 14:43:14 crc kubenswrapper[4796]: I1125 14:43:14.105642 4796 scope.go:117] "RemoveContainer" containerID="77ac4f288735593a44ad0bef6e4d90ed860b2b39f8e3da991f67e4bbed77aab4" Nov 25 14:43:14 crc kubenswrapper[4796]: I1125 14:43:14.139825 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kcw2g"] Nov 25 14:43:14 crc kubenswrapper[4796]: I1125 14:43:14.149715 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kcw2g"] Nov 25 14:43:14 crc kubenswrapper[4796]: I1125 14:43:14.155686 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zv5xz"] Nov 25 14:43:14 crc kubenswrapper[4796]: I1125 14:43:14.163710 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 25 14:43:14 crc kubenswrapper[4796]: I1125 14:43:14.171741 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zv5xz"] Nov 25 14:43:14 crc kubenswrapper[4796]: I1125 14:43:14.420487 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0e1037c-c549-4fcf-9c16-13721a1b8bd3" path="/var/lib/kubelet/pods/a0e1037c-c549-4fcf-9c16-13721a1b8bd3/volumes" Nov 25 14:43:14 crc kubenswrapper[4796]: I1125 14:43:14.421629 4796 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e67870b3-2007-43e4-86cc-d4e4153c3e15" path="/var/lib/kubelet/pods/e67870b3-2007-43e4-86cc-d4e4153c3e15/volumes" Nov 25 14:43:14 crc kubenswrapper[4796]: I1125 14:43:14.424394 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 25 14:43:14 crc kubenswrapper[4796]: I1125 14:43:14.424548 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 25 14:43:15 crc kubenswrapper[4796]: I1125 14:43:15.021876 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"25a388f4-cd5a-404d-a777-46f4410e0b3a","Type":"ContainerStarted","Data":"8918c2b8f4951c90242d09d281a71b2910af8c7ed499ebc78c6d849c83da1810"} Nov 25 14:43:15 crc kubenswrapper[4796]: I1125 14:43:15.025352 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b5336ecd-5d7e-4b73-b2a7-d289b8578641","Type":"ContainerStarted","Data":"cce0f49c322b2cff316cefe92b7d7260c0de225ea859eb56e7a4bb998383eca1"} Nov 25 14:43:15 crc kubenswrapper[4796]: I1125 14:43:15.034599 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-m4j6w" event={"ID":"f8826632-6e92-47a0-80ed-2a08f466b851","Type":"ContainerStarted","Data":"eac2719df3e4b344dd4fef490411183a5501e31f022c1995ada4736faedf98be"} Nov 25 14:43:15 crc kubenswrapper[4796]: I1125 14:43:15.044293 4796 generic.go:334] "Generic (PLEG): container finished" podID="e1d37852-53c0-4b9f-8089-a89a96d82753" containerID="a8ee0c095d2ca6bcc422fe8b5c8092f28a93db996c956483868ed783e2913da9" exitCode=0 Nov 25 14:43:15 crc kubenswrapper[4796]: I1125 14:43:15.044339 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" 
event={"ID":"e1d37852-53c0-4b9f-8089-a89a96d82753","Type":"ContainerDied","Data":"a8ee0c095d2ca6bcc422fe8b5c8092f28a93db996c956483868ed783e2913da9"} Nov 25 14:43:15 crc kubenswrapper[4796]: I1125 14:43:15.051787 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=26.328634511 podStartE2EDuration="34.051767872s" podCreationTimestamp="2025-11-25 14:42:41 +0000 UTC" firstStartedPulling="2025-11-25 14:42:55.884843771 +0000 UTC m=+1104.227953195" lastFinishedPulling="2025-11-25 14:43:03.607977132 +0000 UTC m=+1111.951086556" observedRunningTime="2025-11-25 14:43:15.047962805 +0000 UTC m=+1123.391072229" watchObservedRunningTime="2025-11-25 14:43:15.051767872 +0000 UTC m=+1123.394877296" Nov 25 14:43:15 crc kubenswrapper[4796]: I1125 14:43:15.105233 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-m4j6w" podStartSLOduration=3.105213885 podStartE2EDuration="3.105213885s" podCreationTimestamp="2025-11-25 14:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:43:15.10323913 +0000 UTC m=+1123.446348574" watchObservedRunningTime="2025-11-25 14:43:15.105213885 +0000 UTC m=+1123.448323309" Nov 25 14:43:16 crc kubenswrapper[4796]: I1125 14:43:16.067708 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" event={"ID":"e1d37852-53c0-4b9f-8089-a89a96d82753","Type":"ContainerStarted","Data":"9e901cdcce41c4c4fd098fc4e161e3b082bbce473ba061595a1a3944f756716c"} Nov 25 14:43:16 crc kubenswrapper[4796]: I1125 14:43:16.068892 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-m4j6w" Nov 25 14:43:16 crc kubenswrapper[4796]: I1125 14:43:16.098192 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" podStartSLOduration=4.098176364 podStartE2EDuration="4.098176364s" podCreationTimestamp="2025-11-25 14:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:43:16.089515395 +0000 UTC m=+1124.432624839" watchObservedRunningTime="2025-11-25 14:43:16.098176364 +0000 UTC m=+1124.441285788" Nov 25 14:43:16 crc kubenswrapper[4796]: I1125 14:43:16.826869 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 25 14:43:16 crc kubenswrapper[4796]: I1125 14:43:16.911824 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-m4j6w"] Nov 25 14:43:16 crc kubenswrapper[4796]: I1125 14:43:16.952362 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-nmsz6"] Nov 25 14:43:16 crc kubenswrapper[4796]: E1125 14:43:16.952731 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e1037c-c549-4fcf-9c16-13721a1b8bd3" containerName="init" Nov 25 14:43:16 crc kubenswrapper[4796]: I1125 14:43:16.952748 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e1037c-c549-4fcf-9c16-13721a1b8bd3" containerName="init" Nov 25 14:43:16 crc kubenswrapper[4796]: E1125 14:43:16.952759 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e67870b3-2007-43e4-86cc-d4e4153c3e15" containerName="dnsmasq-dns" Nov 25 14:43:16 crc kubenswrapper[4796]: I1125 14:43:16.952766 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="e67870b3-2007-43e4-86cc-d4e4153c3e15" containerName="dnsmasq-dns" Nov 25 14:43:16 crc kubenswrapper[4796]: E1125 14:43:16.952780 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e1037c-c549-4fcf-9c16-13721a1b8bd3" containerName="dnsmasq-dns" Nov 25 14:43:16 crc kubenswrapper[4796]: I1125 14:43:16.952786 4796 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="a0e1037c-c549-4fcf-9c16-13721a1b8bd3" containerName="dnsmasq-dns" Nov 25 14:43:16 crc kubenswrapper[4796]: E1125 14:43:16.952794 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e67870b3-2007-43e4-86cc-d4e4153c3e15" containerName="init" Nov 25 14:43:16 crc kubenswrapper[4796]: I1125 14:43:16.952799 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="e67870b3-2007-43e4-86cc-d4e4153c3e15" containerName="init" Nov 25 14:43:16 crc kubenswrapper[4796]: I1125 14:43:16.952975 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0e1037c-c549-4fcf-9c16-13721a1b8bd3" containerName="dnsmasq-dns" Nov 25 14:43:16 crc kubenswrapper[4796]: I1125 14:43:16.952994 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="e67870b3-2007-43e4-86cc-d4e4153c3e15" containerName="dnsmasq-dns" Nov 25 14:43:16 crc kubenswrapper[4796]: I1125 14:43:16.954068 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nmsz6" Nov 25 14:43:16 crc kubenswrapper[4796]: I1125 14:43:16.964779 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nmsz6"] Nov 25 14:43:16 crc kubenswrapper[4796]: I1125 14:43:16.974815 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/099992bc-6139-4064-b84d-7f9c319026d9-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-nmsz6\" (UID: \"099992bc-6139-4064-b84d-7f9c319026d9\") " pod="openstack/dnsmasq-dns-698758b865-nmsz6" Nov 25 14:43:16 crc kubenswrapper[4796]: I1125 14:43:16.974859 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/099992bc-6139-4064-b84d-7f9c319026d9-config\") pod \"dnsmasq-dns-698758b865-nmsz6\" (UID: \"099992bc-6139-4064-b84d-7f9c319026d9\") " 
pod="openstack/dnsmasq-dns-698758b865-nmsz6" Nov 25 14:43:16 crc kubenswrapper[4796]: I1125 14:43:16.974908 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxlqw\" (UniqueName: \"kubernetes.io/projected/099992bc-6139-4064-b84d-7f9c319026d9-kube-api-access-jxlqw\") pod \"dnsmasq-dns-698758b865-nmsz6\" (UID: \"099992bc-6139-4064-b84d-7f9c319026d9\") " pod="openstack/dnsmasq-dns-698758b865-nmsz6" Nov 25 14:43:16 crc kubenswrapper[4796]: I1125 14:43:16.974927 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/099992bc-6139-4064-b84d-7f9c319026d9-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-nmsz6\" (UID: \"099992bc-6139-4064-b84d-7f9c319026d9\") " pod="openstack/dnsmasq-dns-698758b865-nmsz6" Nov 25 14:43:16 crc kubenswrapper[4796]: I1125 14:43:16.974972 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/099992bc-6139-4064-b84d-7f9c319026d9-dns-svc\") pod \"dnsmasq-dns-698758b865-nmsz6\" (UID: \"099992bc-6139-4064-b84d-7f9c319026d9\") " pod="openstack/dnsmasq-dns-698758b865-nmsz6" Nov 25 14:43:17 crc kubenswrapper[4796]: I1125 14:43:17.074444 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" Nov 25 14:43:17 crc kubenswrapper[4796]: I1125 14:43:17.075708 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/099992bc-6139-4064-b84d-7f9c319026d9-dns-svc\") pod \"dnsmasq-dns-698758b865-nmsz6\" (UID: \"099992bc-6139-4064-b84d-7f9c319026d9\") " pod="openstack/dnsmasq-dns-698758b865-nmsz6" Nov 25 14:43:17 crc kubenswrapper[4796]: I1125 14:43:17.075855 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/099992bc-6139-4064-b84d-7f9c319026d9-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-nmsz6\" (UID: \"099992bc-6139-4064-b84d-7f9c319026d9\") " pod="openstack/dnsmasq-dns-698758b865-nmsz6" Nov 25 14:43:17 crc kubenswrapper[4796]: I1125 14:43:17.075889 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/099992bc-6139-4064-b84d-7f9c319026d9-config\") pod \"dnsmasq-dns-698758b865-nmsz6\" (UID: \"099992bc-6139-4064-b84d-7f9c319026d9\") " pod="openstack/dnsmasq-dns-698758b865-nmsz6" Nov 25 14:43:17 crc kubenswrapper[4796]: I1125 14:43:17.076553 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/099992bc-6139-4064-b84d-7f9c319026d9-dns-svc\") pod \"dnsmasq-dns-698758b865-nmsz6\" (UID: \"099992bc-6139-4064-b84d-7f9c319026d9\") " pod="openstack/dnsmasq-dns-698758b865-nmsz6" Nov 25 14:43:17 crc kubenswrapper[4796]: I1125 14:43:17.076692 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/099992bc-6139-4064-b84d-7f9c319026d9-config\") pod \"dnsmasq-dns-698758b865-nmsz6\" (UID: \"099992bc-6139-4064-b84d-7f9c319026d9\") " pod="openstack/dnsmasq-dns-698758b865-nmsz6" Nov 25 14:43:17 crc kubenswrapper[4796]: I1125 14:43:17.076767 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/099992bc-6139-4064-b84d-7f9c319026d9-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-nmsz6\" (UID: \"099992bc-6139-4064-b84d-7f9c319026d9\") " pod="openstack/dnsmasq-dns-698758b865-nmsz6" Nov 25 14:43:17 crc kubenswrapper[4796]: I1125 14:43:17.076918 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxlqw\" (UniqueName: \"kubernetes.io/projected/099992bc-6139-4064-b84d-7f9c319026d9-kube-api-access-jxlqw\") pod 
\"dnsmasq-dns-698758b865-nmsz6\" (UID: \"099992bc-6139-4064-b84d-7f9c319026d9\") " pod="openstack/dnsmasq-dns-698758b865-nmsz6" Nov 25 14:43:17 crc kubenswrapper[4796]: I1125 14:43:17.076954 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/099992bc-6139-4064-b84d-7f9c319026d9-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-nmsz6\" (UID: \"099992bc-6139-4064-b84d-7f9c319026d9\") " pod="openstack/dnsmasq-dns-698758b865-nmsz6" Nov 25 14:43:17 crc kubenswrapper[4796]: I1125 14:43:17.077547 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/099992bc-6139-4064-b84d-7f9c319026d9-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-nmsz6\" (UID: \"099992bc-6139-4064-b84d-7f9c319026d9\") " pod="openstack/dnsmasq-dns-698758b865-nmsz6" Nov 25 14:43:17 crc kubenswrapper[4796]: I1125 14:43:17.102417 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxlqw\" (UniqueName: \"kubernetes.io/projected/099992bc-6139-4064-b84d-7f9c319026d9-kube-api-access-jxlqw\") pod \"dnsmasq-dns-698758b865-nmsz6\" (UID: \"099992bc-6139-4064-b84d-7f9c319026d9\") " pod="openstack/dnsmasq-dns-698758b865-nmsz6" Nov 25 14:43:17 crc kubenswrapper[4796]: I1125 14:43:17.281732 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nmsz6" Nov 25 14:43:17 crc kubenswrapper[4796]: I1125 14:43:17.696462 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nmsz6"] Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.074977 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.081933 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.084487 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.085109 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-rngn2" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.085251 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.085501 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-m4j6w" podUID="f8826632-6e92-47a0-80ed-2a08f466b851" containerName="dnsmasq-dns" containerID="cri-o://eac2719df3e4b344dd4fef490411183a5501e31f022c1995ada4736faedf98be" gracePeriod=10 Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.085704 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nmsz6" event={"ID":"099992bc-6139-4064-b84d-7f9c319026d9","Type":"ContainerStarted","Data":"e9334234c732ddac739ebc4120dcdaf7559b31eb390f4a80ea3ef14fc523a461"} Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.086543 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.128316 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.195259 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/49501e2a-5ad0-4de7-9b98-510c0c55863f-cache\") pod \"swift-storage-0\" (UID: \"49501e2a-5ad0-4de7-9b98-510c0c55863f\") " pod="openstack/swift-storage-0" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.195293 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/49501e2a-5ad0-4de7-9b98-510c0c55863f-lock\") pod \"swift-storage-0\" (UID: \"49501e2a-5ad0-4de7-9b98-510c0c55863f\") " pod="openstack/swift-storage-0" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.195315 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4458k\" (UniqueName: \"kubernetes.io/projected/49501e2a-5ad0-4de7-9b98-510c0c55863f-kube-api-access-4458k\") pod \"swift-storage-0\" (UID: \"49501e2a-5ad0-4de7-9b98-510c0c55863f\") " pod="openstack/swift-storage-0" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.195358 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/49501e2a-5ad0-4de7-9b98-510c0c55863f-etc-swift\") pod \"swift-storage-0\" (UID: \"49501e2a-5ad0-4de7-9b98-510c0c55863f\") " pod="openstack/swift-storage-0" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.195452 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"49501e2a-5ad0-4de7-9b98-510c0c55863f\") " pod="openstack/swift-storage-0" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.296401 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/49501e2a-5ad0-4de7-9b98-510c0c55863f-cache\") pod \"swift-storage-0\" (UID: \"49501e2a-5ad0-4de7-9b98-510c0c55863f\") " pod="openstack/swift-storage-0" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.296437 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: 
\"kubernetes.io/empty-dir/49501e2a-5ad0-4de7-9b98-510c0c55863f-lock\") pod \"swift-storage-0\" (UID: \"49501e2a-5ad0-4de7-9b98-510c0c55863f\") " pod="openstack/swift-storage-0" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.296461 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4458k\" (UniqueName: \"kubernetes.io/projected/49501e2a-5ad0-4de7-9b98-510c0c55863f-kube-api-access-4458k\") pod \"swift-storage-0\" (UID: \"49501e2a-5ad0-4de7-9b98-510c0c55863f\") " pod="openstack/swift-storage-0" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.296515 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/49501e2a-5ad0-4de7-9b98-510c0c55863f-etc-swift\") pod \"swift-storage-0\" (UID: \"49501e2a-5ad0-4de7-9b98-510c0c55863f\") " pod="openstack/swift-storage-0" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.296631 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"49501e2a-5ad0-4de7-9b98-510c0c55863f\") " pod="openstack/swift-storage-0" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.296912 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"49501e2a-5ad0-4de7-9b98-510c0c55863f\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.297793 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/49501e2a-5ad0-4de7-9b98-510c0c55863f-lock\") pod \"swift-storage-0\" (UID: \"49501e2a-5ad0-4de7-9b98-510c0c55863f\") " pod="openstack/swift-storage-0" Nov 25 14:43:18 crc kubenswrapper[4796]: 
I1125 14:43:18.298198 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/49501e2a-5ad0-4de7-9b98-510c0c55863f-cache\") pod \"swift-storage-0\" (UID: \"49501e2a-5ad0-4de7-9b98-510c0c55863f\") " pod="openstack/swift-storage-0" Nov 25 14:43:18 crc kubenswrapper[4796]: E1125 14:43:18.298846 4796 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 14:43:18 crc kubenswrapper[4796]: E1125 14:43:18.298886 4796 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 14:43:18 crc kubenswrapper[4796]: E1125 14:43:18.301313 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/49501e2a-5ad0-4de7-9b98-510c0c55863f-etc-swift podName:49501e2a-5ad0-4de7-9b98-510c0c55863f nodeName:}" failed. No retries permitted until 2025-11-25 14:43:18.801280927 +0000 UTC m=+1127.144390371 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/49501e2a-5ad0-4de7-9b98-510c0c55863f-etc-swift") pod "swift-storage-0" (UID: "49501e2a-5ad0-4de7-9b98-510c0c55863f") : configmap "swift-ring-files" not found Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.326053 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"49501e2a-5ad0-4de7-9b98-510c0c55863f\") " pod="openstack/swift-storage-0" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.327042 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4458k\" (UniqueName: \"kubernetes.io/projected/49501e2a-5ad0-4de7-9b98-510c0c55863f-kube-api-access-4458k\") pod \"swift-storage-0\" (UID: \"49501e2a-5ad0-4de7-9b98-510c0c55863f\") " pod="openstack/swift-storage-0" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.619064 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-4mgrp"] Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.623019 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4mgrp" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.627634 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4mgrp"] Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.631638 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.631945 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.632158 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.675947 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-4mgrp"] Nov 25 14:43:18 crc kubenswrapper[4796]: E1125 14:43:18.676477 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-skf9m ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-skf9m ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-4mgrp" podUID="cb7b994b-4302-4e99-b61a-7024e686d688" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.684992 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-qbvtm"] Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.686211 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-qbvtm" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.697429 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qbvtm"] Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.806846 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cb7b994b-4302-4e99-b61a-7024e686d688-dispersionconf\") pod \"swift-ring-rebalance-4mgrp\" (UID: \"cb7b994b-4302-4e99-b61a-7024e686d688\") " pod="openstack/swift-ring-rebalance-4mgrp" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.806910 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/49501e2a-5ad0-4de7-9b98-510c0c55863f-etc-swift\") pod \"swift-storage-0\" (UID: \"49501e2a-5ad0-4de7-9b98-510c0c55863f\") " pod="openstack/swift-storage-0" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.806955 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a9e78aa-7f69-46de-b6a9-03f837e4f364-combined-ca-bundle\") pod \"swift-ring-rebalance-qbvtm\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " pod="openstack/swift-ring-rebalance-qbvtm" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.806981 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb7b994b-4302-4e99-b61a-7024e686d688-combined-ca-bundle\") pod \"swift-ring-rebalance-4mgrp\" (UID: \"cb7b994b-4302-4e99-b61a-7024e686d688\") " pod="openstack/swift-ring-rebalance-4mgrp" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.807020 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skf9m\" 
(UniqueName: \"kubernetes.io/projected/cb7b994b-4302-4e99-b61a-7024e686d688-kube-api-access-skf9m\") pod \"swift-ring-rebalance-4mgrp\" (UID: \"cb7b994b-4302-4e99-b61a-7024e686d688\") " pod="openstack/swift-ring-rebalance-4mgrp" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.807045 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cb7b994b-4302-4e99-b61a-7024e686d688-swiftconf\") pod \"swift-ring-rebalance-4mgrp\" (UID: \"cb7b994b-4302-4e99-b61a-7024e686d688\") " pod="openstack/swift-ring-rebalance-4mgrp" Nov 25 14:43:18 crc kubenswrapper[4796]: E1125 14:43:18.807046 4796 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 14:43:18 crc kubenswrapper[4796]: E1125 14:43:18.807073 4796 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.807081 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttrkx\" (UniqueName: \"kubernetes.io/projected/8a9e78aa-7f69-46de-b6a9-03f837e4f364-kube-api-access-ttrkx\") pod \"swift-ring-rebalance-qbvtm\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " pod="openstack/swift-ring-rebalance-qbvtm" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.807107 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cb7b994b-4302-4e99-b61a-7024e686d688-ring-data-devices\") pod \"swift-ring-rebalance-4mgrp\" (UID: \"cb7b994b-4302-4e99-b61a-7024e686d688\") " pod="openstack/swift-ring-rebalance-4mgrp" Nov 25 14:43:18 crc kubenswrapper[4796]: E1125 14:43:18.807124 4796 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/49501e2a-5ad0-4de7-9b98-510c0c55863f-etc-swift podName:49501e2a-5ad0-4de7-9b98-510c0c55863f nodeName:}" failed. No retries permitted until 2025-11-25 14:43:19.807105428 +0000 UTC m=+1128.150214852 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/49501e2a-5ad0-4de7-9b98-510c0c55863f-etc-swift") pod "swift-storage-0" (UID: "49501e2a-5ad0-4de7-9b98-510c0c55863f") : configmap "swift-ring-files" not found Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.807156 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a9e78aa-7f69-46de-b6a9-03f837e4f364-scripts\") pod \"swift-ring-rebalance-qbvtm\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " pod="openstack/swift-ring-rebalance-qbvtm" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.807201 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8a9e78aa-7f69-46de-b6a9-03f837e4f364-etc-swift\") pod \"swift-ring-rebalance-qbvtm\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " pod="openstack/swift-ring-rebalance-qbvtm" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.807229 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb7b994b-4302-4e99-b61a-7024e686d688-scripts\") pod \"swift-ring-rebalance-4mgrp\" (UID: \"cb7b994b-4302-4e99-b61a-7024e686d688\") " pod="openstack/swift-ring-rebalance-4mgrp" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.807251 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cb7b994b-4302-4e99-b61a-7024e686d688-etc-swift\") pod \"swift-ring-rebalance-4mgrp\" (UID: 
\"cb7b994b-4302-4e99-b61a-7024e686d688\") " pod="openstack/swift-ring-rebalance-4mgrp" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.807309 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8a9e78aa-7f69-46de-b6a9-03f837e4f364-swiftconf\") pod \"swift-ring-rebalance-qbvtm\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " pod="openstack/swift-ring-rebalance-qbvtm" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.807339 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8a9e78aa-7f69-46de-b6a9-03f837e4f364-dispersionconf\") pod \"swift-ring-rebalance-qbvtm\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " pod="openstack/swift-ring-rebalance-qbvtm" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.807386 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8a9e78aa-7f69-46de-b6a9-03f837e4f364-ring-data-devices\") pod \"swift-ring-rebalance-qbvtm\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " pod="openstack/swift-ring-rebalance-qbvtm" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.908492 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a9e78aa-7f69-46de-b6a9-03f837e4f364-combined-ca-bundle\") pod \"swift-ring-rebalance-qbvtm\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " pod="openstack/swift-ring-rebalance-qbvtm" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.908533 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb7b994b-4302-4e99-b61a-7024e686d688-combined-ca-bundle\") pod \"swift-ring-rebalance-4mgrp\" (UID: 
\"cb7b994b-4302-4e99-b61a-7024e686d688\") " pod="openstack/swift-ring-rebalance-4mgrp" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.908611 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skf9m\" (UniqueName: \"kubernetes.io/projected/cb7b994b-4302-4e99-b61a-7024e686d688-kube-api-access-skf9m\") pod \"swift-ring-rebalance-4mgrp\" (UID: \"cb7b994b-4302-4e99-b61a-7024e686d688\") " pod="openstack/swift-ring-rebalance-4mgrp" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.908660 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cb7b994b-4302-4e99-b61a-7024e686d688-swiftconf\") pod \"swift-ring-rebalance-4mgrp\" (UID: \"cb7b994b-4302-4e99-b61a-7024e686d688\") " pod="openstack/swift-ring-rebalance-4mgrp" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.908697 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttrkx\" (UniqueName: \"kubernetes.io/projected/8a9e78aa-7f69-46de-b6a9-03f837e4f364-kube-api-access-ttrkx\") pod \"swift-ring-rebalance-qbvtm\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " pod="openstack/swift-ring-rebalance-qbvtm" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.908741 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cb7b994b-4302-4e99-b61a-7024e686d688-ring-data-devices\") pod \"swift-ring-rebalance-4mgrp\" (UID: \"cb7b994b-4302-4e99-b61a-7024e686d688\") " pod="openstack/swift-ring-rebalance-4mgrp" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.908764 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a9e78aa-7f69-46de-b6a9-03f837e4f364-scripts\") pod \"swift-ring-rebalance-qbvtm\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " 
pod="openstack/swift-ring-rebalance-qbvtm" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.908781 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8a9e78aa-7f69-46de-b6a9-03f837e4f364-etc-swift\") pod \"swift-ring-rebalance-qbvtm\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " pod="openstack/swift-ring-rebalance-qbvtm" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.908840 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb7b994b-4302-4e99-b61a-7024e686d688-scripts\") pod \"swift-ring-rebalance-4mgrp\" (UID: \"cb7b994b-4302-4e99-b61a-7024e686d688\") " pod="openstack/swift-ring-rebalance-4mgrp" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.908856 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cb7b994b-4302-4e99-b61a-7024e686d688-etc-swift\") pod \"swift-ring-rebalance-4mgrp\" (UID: \"cb7b994b-4302-4e99-b61a-7024e686d688\") " pod="openstack/swift-ring-rebalance-4mgrp" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.908878 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8a9e78aa-7f69-46de-b6a9-03f837e4f364-swiftconf\") pod \"swift-ring-rebalance-qbvtm\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " pod="openstack/swift-ring-rebalance-qbvtm" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.908913 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8a9e78aa-7f69-46de-b6a9-03f837e4f364-dispersionconf\") pod \"swift-ring-rebalance-qbvtm\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " pod="openstack/swift-ring-rebalance-qbvtm" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.908935 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8a9e78aa-7f69-46de-b6a9-03f837e4f364-ring-data-devices\") pod \"swift-ring-rebalance-qbvtm\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " pod="openstack/swift-ring-rebalance-qbvtm" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.908955 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cb7b994b-4302-4e99-b61a-7024e686d688-dispersionconf\") pod \"swift-ring-rebalance-4mgrp\" (UID: \"cb7b994b-4302-4e99-b61a-7024e686d688\") " pod="openstack/swift-ring-rebalance-4mgrp" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.909543 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cb7b994b-4302-4e99-b61a-7024e686d688-etc-swift\") pod \"swift-ring-rebalance-4mgrp\" (UID: \"cb7b994b-4302-4e99-b61a-7024e686d688\") " pod="openstack/swift-ring-rebalance-4mgrp" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.909894 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb7b994b-4302-4e99-b61a-7024e686d688-scripts\") pod \"swift-ring-rebalance-4mgrp\" (UID: \"cb7b994b-4302-4e99-b61a-7024e686d688\") " pod="openstack/swift-ring-rebalance-4mgrp" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.910019 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8a9e78aa-7f69-46de-b6a9-03f837e4f364-etc-swift\") pod \"swift-ring-rebalance-qbvtm\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " pod="openstack/swift-ring-rebalance-qbvtm" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.910261 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/8a9e78aa-7f69-46de-b6a9-03f837e4f364-ring-data-devices\") pod \"swift-ring-rebalance-qbvtm\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " pod="openstack/swift-ring-rebalance-qbvtm" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.910438 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a9e78aa-7f69-46de-b6a9-03f837e4f364-scripts\") pod \"swift-ring-rebalance-qbvtm\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " pod="openstack/swift-ring-rebalance-qbvtm" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.910619 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cb7b994b-4302-4e99-b61a-7024e686d688-ring-data-devices\") pod \"swift-ring-rebalance-4mgrp\" (UID: \"cb7b994b-4302-4e99-b61a-7024e686d688\") " pod="openstack/swift-ring-rebalance-4mgrp" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.913114 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cb7b994b-4302-4e99-b61a-7024e686d688-dispersionconf\") pod \"swift-ring-rebalance-4mgrp\" (UID: \"cb7b994b-4302-4e99-b61a-7024e686d688\") " pod="openstack/swift-ring-rebalance-4mgrp" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.913338 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb7b994b-4302-4e99-b61a-7024e686d688-combined-ca-bundle\") pod \"swift-ring-rebalance-4mgrp\" (UID: \"cb7b994b-4302-4e99-b61a-7024e686d688\") " pod="openstack/swift-ring-rebalance-4mgrp" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.913763 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8a9e78aa-7f69-46de-b6a9-03f837e4f364-swiftconf\") pod \"swift-ring-rebalance-qbvtm\" (UID: 
\"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " pod="openstack/swift-ring-rebalance-qbvtm" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.916944 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cb7b994b-4302-4e99-b61a-7024e686d688-swiftconf\") pod \"swift-ring-rebalance-4mgrp\" (UID: \"cb7b994b-4302-4e99-b61a-7024e686d688\") " pod="openstack/swift-ring-rebalance-4mgrp" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.917046 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a9e78aa-7f69-46de-b6a9-03f837e4f364-combined-ca-bundle\") pod \"swift-ring-rebalance-qbvtm\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " pod="openstack/swift-ring-rebalance-qbvtm" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.917377 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8a9e78aa-7f69-46de-b6a9-03f837e4f364-dispersionconf\") pod \"swift-ring-rebalance-qbvtm\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " pod="openstack/swift-ring-rebalance-qbvtm" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.925819 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttrkx\" (UniqueName: \"kubernetes.io/projected/8a9e78aa-7f69-46de-b6a9-03f837e4f364-kube-api-access-ttrkx\") pod \"swift-ring-rebalance-qbvtm\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " pod="openstack/swift-ring-rebalance-qbvtm" Nov 25 14:43:18 crc kubenswrapper[4796]: I1125 14:43:18.925889 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skf9m\" (UniqueName: \"kubernetes.io/projected/cb7b994b-4302-4e99-b61a-7024e686d688-kube-api-access-skf9m\") pod \"swift-ring-rebalance-4mgrp\" (UID: \"cb7b994b-4302-4e99-b61a-7024e686d688\") " pod="openstack/swift-ring-rebalance-4mgrp" Nov 25 
14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.000683 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qbvtm" Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.100365 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b5336ecd-5d7e-4b73-b2a7-d289b8578641","Type":"ContainerStarted","Data":"0568125b0078c77807f0146fb7ec9e8cd266622be481ac2f40e9ba1109d5f795"} Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.105627 4796 generic.go:334] "Generic (PLEG): container finished" podID="f8826632-6e92-47a0-80ed-2a08f466b851" containerID="eac2719df3e4b344dd4fef490411183a5501e31f022c1995ada4736faedf98be" exitCode=0 Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.105694 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-m4j6w" event={"ID":"f8826632-6e92-47a0-80ed-2a08f466b851","Type":"ContainerDied","Data":"eac2719df3e4b344dd4fef490411183a5501e31f022c1995ada4736faedf98be"} Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.108541 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nmsz6" event={"ID":"099992bc-6139-4064-b84d-7f9c319026d9","Type":"ContainerStarted","Data":"6daaca2e7aff3f704579e5522c48eb970abd16dcd86ec4dafd170c77dc00e0c4"} Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.108644 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4mgrp" Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.151931 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4mgrp" Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.315048 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skf9m\" (UniqueName: \"kubernetes.io/projected/cb7b994b-4302-4e99-b61a-7024e686d688-kube-api-access-skf9m\") pod \"cb7b994b-4302-4e99-b61a-7024e686d688\" (UID: \"cb7b994b-4302-4e99-b61a-7024e686d688\") " Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.315097 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cb7b994b-4302-4e99-b61a-7024e686d688-etc-swift\") pod \"cb7b994b-4302-4e99-b61a-7024e686d688\" (UID: \"cb7b994b-4302-4e99-b61a-7024e686d688\") " Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.315122 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cb7b994b-4302-4e99-b61a-7024e686d688-ring-data-devices\") pod \"cb7b994b-4302-4e99-b61a-7024e686d688\" (UID: \"cb7b994b-4302-4e99-b61a-7024e686d688\") " Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.315186 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cb7b994b-4302-4e99-b61a-7024e686d688-swiftconf\") pod \"cb7b994b-4302-4e99-b61a-7024e686d688\" (UID: \"cb7b994b-4302-4e99-b61a-7024e686d688\") " Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.315229 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb7b994b-4302-4e99-b61a-7024e686d688-combined-ca-bundle\") pod \"cb7b994b-4302-4e99-b61a-7024e686d688\" (UID: \"cb7b994b-4302-4e99-b61a-7024e686d688\") " Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.315266 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cb7b994b-4302-4e99-b61a-7024e686d688-dispersionconf\") pod \"cb7b994b-4302-4e99-b61a-7024e686d688\" (UID: \"cb7b994b-4302-4e99-b61a-7024e686d688\") " Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.315333 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb7b994b-4302-4e99-b61a-7024e686d688-scripts\") pod \"cb7b994b-4302-4e99-b61a-7024e686d688\" (UID: \"cb7b994b-4302-4e99-b61a-7024e686d688\") " Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.315513 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb7b994b-4302-4e99-b61a-7024e686d688-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "cb7b994b-4302-4e99-b61a-7024e686d688" (UID: "cb7b994b-4302-4e99-b61a-7024e686d688"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.315804 4796 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cb7b994b-4302-4e99-b61a-7024e686d688-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.316046 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb7b994b-4302-4e99-b61a-7024e686d688-scripts" (OuterVolumeSpecName: "scripts") pod "cb7b994b-4302-4e99-b61a-7024e686d688" (UID: "cb7b994b-4302-4e99-b61a-7024e686d688"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.316344 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb7b994b-4302-4e99-b61a-7024e686d688-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "cb7b994b-4302-4e99-b61a-7024e686d688" (UID: "cb7b994b-4302-4e99-b61a-7024e686d688"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.320890 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb7b994b-4302-4e99-b61a-7024e686d688-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "cb7b994b-4302-4e99-b61a-7024e686d688" (UID: "cb7b994b-4302-4e99-b61a-7024e686d688"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.321772 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb7b994b-4302-4e99-b61a-7024e686d688-kube-api-access-skf9m" (OuterVolumeSpecName: "kube-api-access-skf9m") pod "cb7b994b-4302-4e99-b61a-7024e686d688" (UID: "cb7b994b-4302-4e99-b61a-7024e686d688"). InnerVolumeSpecName "kube-api-access-skf9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.321968 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb7b994b-4302-4e99-b61a-7024e686d688-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb7b994b-4302-4e99-b61a-7024e686d688" (UID: "cb7b994b-4302-4e99-b61a-7024e686d688"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.323023 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb7b994b-4302-4e99-b61a-7024e686d688-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "cb7b994b-4302-4e99-b61a-7024e686d688" (UID: "cb7b994b-4302-4e99-b61a-7024e686d688"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.417550 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skf9m\" (UniqueName: \"kubernetes.io/projected/cb7b994b-4302-4e99-b61a-7024e686d688-kube-api-access-skf9m\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.417597 4796 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cb7b994b-4302-4e99-b61a-7024e686d688-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.417606 4796 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cb7b994b-4302-4e99-b61a-7024e686d688-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.417618 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb7b994b-4302-4e99-b61a-7024e686d688-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.417628 4796 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cb7b994b-4302-4e99-b61a-7024e686d688-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.417636 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/cb7b994b-4302-4e99-b61a-7024e686d688-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.426718 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qbvtm"] Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.515768 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.515865 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.825534 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/49501e2a-5ad0-4de7-9b98-510c0c55863f-etc-swift\") pod \"swift-storage-0\" (UID: \"49501e2a-5ad0-4de7-9b98-510c0c55863f\") " pod="openstack/swift-storage-0" Nov 25 14:43:19 crc kubenswrapper[4796]: E1125 14:43:19.825713 4796 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 14:43:19 crc kubenswrapper[4796]: E1125 14:43:19.825784 4796 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 14:43:19 crc kubenswrapper[4796]: E1125 14:43:19.825829 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/49501e2a-5ad0-4de7-9b98-510c0c55863f-etc-swift podName:49501e2a-5ad0-4de7-9b98-510c0c55863f 
nodeName:}" failed. No retries permitted until 2025-11-25 14:43:21.825813444 +0000 UTC m=+1130.168922868 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/49501e2a-5ad0-4de7-9b98-510c0c55863f-etc-swift") pod "swift-storage-0" (UID: "49501e2a-5ad0-4de7-9b98-510c0c55863f") : configmap "swift-ring-files" not found Nov 25 14:43:19 crc kubenswrapper[4796]: I1125 14:43:19.997075 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-m4j6w" Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.122672 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b5336ecd-5d7e-4b73-b2a7-d289b8578641","Type":"ContainerStarted","Data":"347e55df896962d5aea9e535e1ef2de334115e262f249626bcf051721ac782e3"} Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.122802 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.126314 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-m4j6w" event={"ID":"f8826632-6e92-47a0-80ed-2a08f466b851","Type":"ContainerDied","Data":"18d335a29f798c411e1d5c3d6efbd676af4cb45eb5f6ea19ff2db0f1ce451969"} Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.126368 4796 scope.go:117] "RemoveContainer" containerID="eac2719df3e4b344dd4fef490411183a5501e31f022c1995ada4736faedf98be" Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.126466 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-m4j6w" Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.128435 4796 generic.go:334] "Generic (PLEG): container finished" podID="099992bc-6139-4064-b84d-7f9c319026d9" containerID="6daaca2e7aff3f704579e5522c48eb970abd16dcd86ec4dafd170c77dc00e0c4" exitCode=0 Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.128483 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nmsz6" event={"ID":"099992bc-6139-4064-b84d-7f9c319026d9","Type":"ContainerDied","Data":"6daaca2e7aff3f704579e5522c48eb970abd16dcd86ec4dafd170c77dc00e0c4"} Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.129482 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7sfh\" (UniqueName: \"kubernetes.io/projected/f8826632-6e92-47a0-80ed-2a08f466b851-kube-api-access-q7sfh\") pod \"f8826632-6e92-47a0-80ed-2a08f466b851\" (UID: \"f8826632-6e92-47a0-80ed-2a08f466b851\") " Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.129590 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8826632-6e92-47a0-80ed-2a08f466b851-dns-svc\") pod \"f8826632-6e92-47a0-80ed-2a08f466b851\" (UID: \"f8826632-6e92-47a0-80ed-2a08f466b851\") " Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.129703 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8826632-6e92-47a0-80ed-2a08f466b851-ovsdbserver-nb\") pod \"f8826632-6e92-47a0-80ed-2a08f466b851\" (UID: \"f8826632-6e92-47a0-80ed-2a08f466b851\") " Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.129787 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8826632-6e92-47a0-80ed-2a08f466b851-config\") pod \"f8826632-6e92-47a0-80ed-2a08f466b851\" (UID: 
\"f8826632-6e92-47a0-80ed-2a08f466b851\") " Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.136064 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8826632-6e92-47a0-80ed-2a08f466b851-kube-api-access-q7sfh" (OuterVolumeSpecName: "kube-api-access-q7sfh") pod "f8826632-6e92-47a0-80ed-2a08f466b851" (UID: "f8826632-6e92-47a0-80ed-2a08f466b851"). InnerVolumeSpecName "kube-api-access-q7sfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.138594 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4mgrp" Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.139909 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qbvtm" event={"ID":"8a9e78aa-7f69-46de-b6a9-03f837e4f364","Type":"ContainerStarted","Data":"9a116e732e52d59c3809915db17734f60c83aa8b7fd663f9badc0ac927f8a6c4"} Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.157762 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=4.208688455 podStartE2EDuration="7.157318508s" podCreationTimestamp="2025-11-25 14:43:13 +0000 UTC" firstStartedPulling="2025-11-25 14:43:14.153896598 +0000 UTC m=+1122.497006012" lastFinishedPulling="2025-11-25 14:43:17.102526641 +0000 UTC m=+1125.445636065" observedRunningTime="2025-11-25 14:43:20.143757345 +0000 UTC m=+1128.486866789" watchObservedRunningTime="2025-11-25 14:43:20.157318508 +0000 UTC m=+1128.500427952" Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.196240 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8826632-6e92-47a0-80ed-2a08f466b851-config" (OuterVolumeSpecName: "config") pod "f8826632-6e92-47a0-80ed-2a08f466b851" (UID: "f8826632-6e92-47a0-80ed-2a08f466b851"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.207164 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8826632-6e92-47a0-80ed-2a08f466b851-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f8826632-6e92-47a0-80ed-2a08f466b851" (UID: "f8826632-6e92-47a0-80ed-2a08f466b851"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.231950 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8826632-6e92-47a0-80ed-2a08f466b851-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.231978 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8826632-6e92-47a0-80ed-2a08f466b851-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.231987 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7sfh\" (UniqueName: \"kubernetes.io/projected/f8826632-6e92-47a0-80ed-2a08f466b851-kube-api-access-q7sfh\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.234050 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8826632-6e92-47a0-80ed-2a08f466b851-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f8826632-6e92-47a0-80ed-2a08f466b851" (UID: "f8826632-6e92-47a0-80ed-2a08f466b851"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.309722 4796 scope.go:117] "RemoveContainer" containerID="804e7316d8bcf7219943ab3392898a5794b1d719520291f33e9b74d61c3b4689" Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.333005 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8826632-6e92-47a0-80ed-2a08f466b851-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.362098 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-4mgrp"] Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.367213 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-4mgrp"] Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.422086 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb7b994b-4302-4e99-b61a-7024e686d688" path="/var/lib/kubelet/pods/cb7b994b-4302-4e99-b61a-7024e686d688/volumes" Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.460947 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-m4j6w"] Nov 25 14:43:20 crc kubenswrapper[4796]: I1125 14:43:20.466709 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-m4j6w"] Nov 25 14:43:21 crc kubenswrapper[4796]: I1125 14:43:21.148802 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nmsz6" event={"ID":"099992bc-6139-4064-b84d-7f9c319026d9","Type":"ContainerStarted","Data":"7e44095bfee5b8037ffcd116956bbfd491e02e86a03231cdddebd960e025836f"} Nov 25 14:43:21 crc kubenswrapper[4796]: I1125 14:43:21.859640 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/49501e2a-5ad0-4de7-9b98-510c0c55863f-etc-swift\") pod \"swift-storage-0\" (UID: 
\"49501e2a-5ad0-4de7-9b98-510c0c55863f\") " pod="openstack/swift-storage-0" Nov 25 14:43:21 crc kubenswrapper[4796]: E1125 14:43:21.859899 4796 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 14:43:21 crc kubenswrapper[4796]: E1125 14:43:21.859918 4796 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 14:43:21 crc kubenswrapper[4796]: E1125 14:43:21.859973 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/49501e2a-5ad0-4de7-9b98-510c0c55863f-etc-swift podName:49501e2a-5ad0-4de7-9b98-510c0c55863f nodeName:}" failed. No retries permitted until 2025-11-25 14:43:25.859956009 +0000 UTC m=+1134.203065433 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/49501e2a-5ad0-4de7-9b98-510c0c55863f-etc-swift") pod "swift-storage-0" (UID: "49501e2a-5ad0-4de7-9b98-510c0c55863f") : configmap "swift-ring-files" not found Nov 25 14:43:22 crc kubenswrapper[4796]: I1125 14:43:22.161236 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-nmsz6" Nov 25 14:43:22 crc kubenswrapper[4796]: I1125 14:43:22.197752 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-nmsz6" podStartSLOduration=6.197728692 podStartE2EDuration="6.197728692s" podCreationTimestamp="2025-11-25 14:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:43:22.184943435 +0000 UTC m=+1130.528052889" watchObservedRunningTime="2025-11-25 14:43:22.197728692 +0000 UTC m=+1130.540838146" Nov 25 14:43:22 crc kubenswrapper[4796]: I1125 14:43:22.421916 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f8826632-6e92-47a0-80ed-2a08f466b851" path="/var/lib/kubelet/pods/f8826632-6e92-47a0-80ed-2a08f466b851/volumes" Nov 25 14:43:22 crc kubenswrapper[4796]: I1125 14:43:22.968853 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" Nov 25 14:43:23 crc kubenswrapper[4796]: I1125 14:43:23.134106 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 25 14:43:23 crc kubenswrapper[4796]: I1125 14:43:23.134182 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 25 14:43:25 crc kubenswrapper[4796]: I1125 14:43:25.936100 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/49501e2a-5ad0-4de7-9b98-510c0c55863f-etc-swift\") pod \"swift-storage-0\" (UID: \"49501e2a-5ad0-4de7-9b98-510c0c55863f\") " pod="openstack/swift-storage-0" Nov 25 14:43:25 crc kubenswrapper[4796]: E1125 14:43:25.936300 4796 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 14:43:25 crc kubenswrapper[4796]: E1125 14:43:25.936640 4796 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 14:43:25 crc kubenswrapper[4796]: E1125 14:43:25.936723 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/49501e2a-5ad0-4de7-9b98-510c0c55863f-etc-swift podName:49501e2a-5ad0-4de7-9b98-510c0c55863f nodeName:}" failed. No retries permitted until 2025-11-25 14:43:33.936698161 +0000 UTC m=+1142.279807615 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/49501e2a-5ad0-4de7-9b98-510c0c55863f-etc-swift") pod "swift-storage-0" (UID: "49501e2a-5ad0-4de7-9b98-510c0c55863f") : configmap "swift-ring-files" not found Nov 25 14:43:27 crc kubenswrapper[4796]: I1125 14:43:27.284763 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-nmsz6" Nov 25 14:43:27 crc kubenswrapper[4796]: I1125 14:43:27.342968 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-flfsz"] Nov 25 14:43:27 crc kubenswrapper[4796]: I1125 14:43:27.343705 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" podUID="e1d37852-53c0-4b9f-8089-a89a96d82753" containerName="dnsmasq-dns" containerID="cri-o://9e901cdcce41c4c4fd098fc4e161e3b082bbce473ba061595a1a3944f756716c" gracePeriod=10 Nov 25 14:43:27 crc kubenswrapper[4796]: I1125 14:43:27.970752 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" podUID="e1d37852-53c0-4b9f-8089-a89a96d82753" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Nov 25 14:43:28 crc kubenswrapper[4796]: I1125 14:43:28.213661 4796 generic.go:334] "Generic (PLEG): container finished" podID="e1d37852-53c0-4b9f-8089-a89a96d82753" containerID="9e901cdcce41c4c4fd098fc4e161e3b082bbce473ba061595a1a3944f756716c" exitCode=0 Nov 25 14:43:28 crc kubenswrapper[4796]: I1125 14:43:28.213718 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" event={"ID":"e1d37852-53c0-4b9f-8089-a89a96d82753","Type":"ContainerDied","Data":"9e901cdcce41c4c4fd098fc4e161e3b082bbce473ba061595a1a3944f756716c"} Nov 25 14:43:28 crc kubenswrapper[4796]: I1125 14:43:28.423766 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/openstack-galera-0" Nov 25 14:43:28 crc kubenswrapper[4796]: I1125 14:43:28.513401 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="25a388f4-cd5a-404d-a777-46f4410e0b3a" containerName="galera" probeResult="failure" output=< Nov 25 14:43:28 crc kubenswrapper[4796]: wsrep_local_state_comment (Joined) differs from Synced Nov 25 14:43:28 crc kubenswrapper[4796]: > Nov 25 14:43:28 crc kubenswrapper[4796]: I1125 14:43:28.649529 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 25 14:43:29 crc kubenswrapper[4796]: I1125 14:43:29.588487 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 25 14:43:29 crc kubenswrapper[4796]: I1125 14:43:29.705689 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 25 14:43:31 crc kubenswrapper[4796]: I1125 14:43:31.674661 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" Nov 25 14:43:31 crc kubenswrapper[4796]: I1125 14:43:31.739909 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1d37852-53c0-4b9f-8089-a89a96d82753-ovsdbserver-sb\") pod \"e1d37852-53c0-4b9f-8089-a89a96d82753\" (UID: \"e1d37852-53c0-4b9f-8089-a89a96d82753\") " Nov 25 14:43:31 crc kubenswrapper[4796]: I1125 14:43:31.740026 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1d37852-53c0-4b9f-8089-a89a96d82753-ovsdbserver-nb\") pod \"e1d37852-53c0-4b9f-8089-a89a96d82753\" (UID: \"e1d37852-53c0-4b9f-8089-a89a96d82753\") " Nov 25 14:43:31 crc kubenswrapper[4796]: I1125 14:43:31.740043 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1d37852-53c0-4b9f-8089-a89a96d82753-dns-svc\") pod \"e1d37852-53c0-4b9f-8089-a89a96d82753\" (UID: \"e1d37852-53c0-4b9f-8089-a89a96d82753\") " Nov 25 14:43:31 crc kubenswrapper[4796]: I1125 14:43:31.740086 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5lmv\" (UniqueName: \"kubernetes.io/projected/e1d37852-53c0-4b9f-8089-a89a96d82753-kube-api-access-s5lmv\") pod \"e1d37852-53c0-4b9f-8089-a89a96d82753\" (UID: \"e1d37852-53c0-4b9f-8089-a89a96d82753\") " Nov 25 14:43:31 crc kubenswrapper[4796]: I1125 14:43:31.740103 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1d37852-53c0-4b9f-8089-a89a96d82753-config\") pod \"e1d37852-53c0-4b9f-8089-a89a96d82753\" (UID: \"e1d37852-53c0-4b9f-8089-a89a96d82753\") " Nov 25 14:43:31 crc kubenswrapper[4796]: I1125 14:43:31.744669 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e1d37852-53c0-4b9f-8089-a89a96d82753-kube-api-access-s5lmv" (OuterVolumeSpecName: "kube-api-access-s5lmv") pod "e1d37852-53c0-4b9f-8089-a89a96d82753" (UID: "e1d37852-53c0-4b9f-8089-a89a96d82753"). InnerVolumeSpecName "kube-api-access-s5lmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:43:31 crc kubenswrapper[4796]: I1125 14:43:31.778055 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d37852-53c0-4b9f-8089-a89a96d82753-config" (OuterVolumeSpecName: "config") pod "e1d37852-53c0-4b9f-8089-a89a96d82753" (UID: "e1d37852-53c0-4b9f-8089-a89a96d82753"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:43:31 crc kubenswrapper[4796]: I1125 14:43:31.781436 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d37852-53c0-4b9f-8089-a89a96d82753-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e1d37852-53c0-4b9f-8089-a89a96d82753" (UID: "e1d37852-53c0-4b9f-8089-a89a96d82753"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:43:31 crc kubenswrapper[4796]: I1125 14:43:31.787020 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d37852-53c0-4b9f-8089-a89a96d82753-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e1d37852-53c0-4b9f-8089-a89a96d82753" (UID: "e1d37852-53c0-4b9f-8089-a89a96d82753"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:43:31 crc kubenswrapper[4796]: I1125 14:43:31.789093 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d37852-53c0-4b9f-8089-a89a96d82753-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e1d37852-53c0-4b9f-8089-a89a96d82753" (UID: "e1d37852-53c0-4b9f-8089-a89a96d82753"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:43:31 crc kubenswrapper[4796]: I1125 14:43:31.841989 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1d37852-53c0-4b9f-8089-a89a96d82753-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:31 crc kubenswrapper[4796]: I1125 14:43:31.842041 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5lmv\" (UniqueName: \"kubernetes.io/projected/e1d37852-53c0-4b9f-8089-a89a96d82753-kube-api-access-s5lmv\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:31 crc kubenswrapper[4796]: I1125 14:43:31.842067 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1d37852-53c0-4b9f-8089-a89a96d82753-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:31 crc kubenswrapper[4796]: I1125 14:43:31.842087 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1d37852-53c0-4b9f-8089-a89a96d82753-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:31 crc kubenswrapper[4796]: I1125 14:43:31.842106 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1d37852-53c0-4b9f-8089-a89a96d82753-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:32 crc kubenswrapper[4796]: I1125 14:43:32.273473 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" event={"ID":"e1d37852-53c0-4b9f-8089-a89a96d82753","Type":"ContainerDied","Data":"5beab9c5dc6d68459391d5dcfd2aeebad845bab956e0c500c9bfc2bc449565f4"} Nov 25 14:43:32 crc kubenswrapper[4796]: I1125 14:43:32.273855 4796 scope.go:117] "RemoveContainer" containerID="9e901cdcce41c4c4fd098fc4e161e3b082bbce473ba061595a1a3944f756716c" Nov 25 14:43:32 crc kubenswrapper[4796]: I1125 14:43:32.273516 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-flfsz" Nov 25 14:43:32 crc kubenswrapper[4796]: I1125 14:43:32.276171 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qbvtm" event={"ID":"8a9e78aa-7f69-46de-b6a9-03f837e4f364","Type":"ContainerStarted","Data":"6c995681bb9faddc4335ae6804a9b7d4f78934fa1fec3b3736d8c7efd0d0cbe4"} Nov 25 14:43:32 crc kubenswrapper[4796]: I1125 14:43:32.303877 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-qbvtm" podStartSLOduration=2.131542728 podStartE2EDuration="14.30384804s" podCreationTimestamp="2025-11-25 14:43:18 +0000 UTC" firstStartedPulling="2025-11-25 14:43:19.49365538 +0000 UTC m=+1127.836764804" lastFinishedPulling="2025-11-25 14:43:31.665960692 +0000 UTC m=+1140.009070116" observedRunningTime="2025-11-25 14:43:32.296619889 +0000 UTC m=+1140.639729323" watchObservedRunningTime="2025-11-25 14:43:32.30384804 +0000 UTC m=+1140.646957504" Nov 25 14:43:32 crc kubenswrapper[4796]: I1125 14:43:32.330072 4796 scope.go:117] "RemoveContainer" containerID="a8ee0c095d2ca6bcc422fe8b5c8092f28a93db996c956483868ed783e2913da9" Nov 25 14:43:32 crc kubenswrapper[4796]: I1125 14:43:32.333269 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-flfsz"] Nov 25 14:43:32 crc kubenswrapper[4796]: I1125 14:43:32.341739 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-flfsz"] Nov 25 14:43:32 crc kubenswrapper[4796]: I1125 14:43:32.428327 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d37852-53c0-4b9f-8089-a89a96d82753" path="/var/lib/kubelet/pods/e1d37852-53c0-4b9f-8089-a89a96d82753/volumes" Nov 25 14:43:33 crc kubenswrapper[4796]: I1125 14:43:33.201164 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 25 14:43:33 crc kubenswrapper[4796]: I1125 14:43:33.975940 
4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/49501e2a-5ad0-4de7-9b98-510c0c55863f-etc-swift\") pod \"swift-storage-0\" (UID: \"49501e2a-5ad0-4de7-9b98-510c0c55863f\") " pod="openstack/swift-storage-0" Nov 25 14:43:33 crc kubenswrapper[4796]: E1125 14:43:33.976108 4796 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 14:43:33 crc kubenswrapper[4796]: E1125 14:43:33.976560 4796 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 14:43:33 crc kubenswrapper[4796]: E1125 14:43:33.976669 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/49501e2a-5ad0-4de7-9b98-510c0c55863f-etc-swift podName:49501e2a-5ad0-4de7-9b98-510c0c55863f nodeName:}" failed. No retries permitted until 2025-11-25 14:43:49.976654735 +0000 UTC m=+1158.319764159 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/49501e2a-5ad0-4de7-9b98-510c0c55863f-etc-swift") pod "swift-storage-0" (UID: "49501e2a-5ad0-4de7-9b98-510c0c55863f") : configmap "swift-ring-files" not found Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.600735 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-wvj8r"] Nov 25 14:43:34 crc kubenswrapper[4796]: E1125 14:43:34.601156 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1d37852-53c0-4b9f-8089-a89a96d82753" containerName="init" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.601175 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1d37852-53c0-4b9f-8089-a89a96d82753" containerName="init" Nov 25 14:43:34 crc kubenswrapper[4796]: E1125 14:43:34.601192 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8826632-6e92-47a0-80ed-2a08f466b851" containerName="init" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.601197 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8826632-6e92-47a0-80ed-2a08f466b851" containerName="init" Nov 25 14:43:34 crc kubenswrapper[4796]: E1125 14:43:34.601215 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8826632-6e92-47a0-80ed-2a08f466b851" containerName="dnsmasq-dns" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.601220 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8826632-6e92-47a0-80ed-2a08f466b851" containerName="dnsmasq-dns" Nov 25 14:43:34 crc kubenswrapper[4796]: E1125 14:43:34.601234 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1d37852-53c0-4b9f-8089-a89a96d82753" containerName="dnsmasq-dns" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.601240 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1d37852-53c0-4b9f-8089-a89a96d82753" containerName="dnsmasq-dns" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.601401 
4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1d37852-53c0-4b9f-8089-a89a96d82753" containerName="dnsmasq-dns" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.601413 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8826632-6e92-47a0-80ed-2a08f466b851" containerName="dnsmasq-dns" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.601968 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wvj8r" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.619249 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6761-account-create-t2jhq"] Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.620635 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6761-account-create-t2jhq" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.624802 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.629671 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wvj8r"] Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.635807 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6761-account-create-t2jhq"] Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.687013 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d08ad353-8375-4a85-a3ed-66bc9d869e5c-operator-scripts\") pod \"keystone-6761-account-create-t2jhq\" (UID: \"d08ad353-8375-4a85-a3ed-66bc9d869e5c\") " pod="openstack/keystone-6761-account-create-t2jhq" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.687073 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-295ps\" (UniqueName: 
\"kubernetes.io/projected/23eee6d0-1c05-4c07-a956-888ec367e90a-kube-api-access-295ps\") pod \"keystone-db-create-wvj8r\" (UID: \"23eee6d0-1c05-4c07-a956-888ec367e90a\") " pod="openstack/keystone-db-create-wvj8r" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.687102 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5xs2\" (UniqueName: \"kubernetes.io/projected/d08ad353-8375-4a85-a3ed-66bc9d869e5c-kube-api-access-k5xs2\") pod \"keystone-6761-account-create-t2jhq\" (UID: \"d08ad353-8375-4a85-a3ed-66bc9d869e5c\") " pod="openstack/keystone-6761-account-create-t2jhq" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.687179 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23eee6d0-1c05-4c07-a956-888ec367e90a-operator-scripts\") pod \"keystone-db-create-wvj8r\" (UID: \"23eee6d0-1c05-4c07-a956-888ec367e90a\") " pod="openstack/keystone-db-create-wvj8r" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.788625 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23eee6d0-1c05-4c07-a956-888ec367e90a-operator-scripts\") pod \"keystone-db-create-wvj8r\" (UID: \"23eee6d0-1c05-4c07-a956-888ec367e90a\") " pod="openstack/keystone-db-create-wvj8r" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.788748 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d08ad353-8375-4a85-a3ed-66bc9d869e5c-operator-scripts\") pod \"keystone-6761-account-create-t2jhq\" (UID: \"d08ad353-8375-4a85-a3ed-66bc9d869e5c\") " pod="openstack/keystone-6761-account-create-t2jhq" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.788775 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-295ps\" (UniqueName: \"kubernetes.io/projected/23eee6d0-1c05-4c07-a956-888ec367e90a-kube-api-access-295ps\") pod \"keystone-db-create-wvj8r\" (UID: \"23eee6d0-1c05-4c07-a956-888ec367e90a\") " pod="openstack/keystone-db-create-wvj8r" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.788792 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5xs2\" (UniqueName: \"kubernetes.io/projected/d08ad353-8375-4a85-a3ed-66bc9d869e5c-kube-api-access-k5xs2\") pod \"keystone-6761-account-create-t2jhq\" (UID: \"d08ad353-8375-4a85-a3ed-66bc9d869e5c\") " pod="openstack/keystone-6761-account-create-t2jhq" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.789742 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d08ad353-8375-4a85-a3ed-66bc9d869e5c-operator-scripts\") pod \"keystone-6761-account-create-t2jhq\" (UID: \"d08ad353-8375-4a85-a3ed-66bc9d869e5c\") " pod="openstack/keystone-6761-account-create-t2jhq" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.790404 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23eee6d0-1c05-4c07-a956-888ec367e90a-operator-scripts\") pod \"keystone-db-create-wvj8r\" (UID: \"23eee6d0-1c05-4c07-a956-888ec367e90a\") " pod="openstack/keystone-db-create-wvj8r" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.807613 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-295ps\" (UniqueName: \"kubernetes.io/projected/23eee6d0-1c05-4c07-a956-888ec367e90a-kube-api-access-295ps\") pod \"keystone-db-create-wvj8r\" (UID: \"23eee6d0-1c05-4c07-a956-888ec367e90a\") " pod="openstack/keystone-db-create-wvj8r" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.808980 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5xs2\" 
(UniqueName: \"kubernetes.io/projected/d08ad353-8375-4a85-a3ed-66bc9d869e5c-kube-api-access-k5xs2\") pod \"keystone-6761-account-create-t2jhq\" (UID: \"d08ad353-8375-4a85-a3ed-66bc9d869e5c\") " pod="openstack/keystone-6761-account-create-t2jhq" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.874620 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-ggn22"] Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.875808 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-ggn22" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.885087 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-ggn22"] Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.973920 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wvj8r" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.984226 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-11c0-account-create-9l6dt"] Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.986790 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-11c0-account-create-9l6dt" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.988929 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6761-account-create-t2jhq" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.989201 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.990988 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqfrr\" (UniqueName: \"kubernetes.io/projected/26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006-kube-api-access-dqfrr\") pod \"placement-db-create-ggn22\" (UID: \"26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006\") " pod="openstack/placement-db-create-ggn22" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.991046 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006-operator-scripts\") pod \"placement-db-create-ggn22\" (UID: \"26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006\") " pod="openstack/placement-db-create-ggn22" Nov 25 14:43:34 crc kubenswrapper[4796]: I1125 14:43:34.995102 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-11c0-account-create-9l6dt"] Nov 25 14:43:35 crc kubenswrapper[4796]: I1125 14:43:35.093210 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jq5s\" (UniqueName: \"kubernetes.io/projected/dec9638d-06b3-491b-895c-a3c306acddb5-kube-api-access-8jq5s\") pod \"placement-11c0-account-create-9l6dt\" (UID: \"dec9638d-06b3-491b-895c-a3c306acddb5\") " pod="openstack/placement-11c0-account-create-9l6dt" Nov 25 14:43:35 crc kubenswrapper[4796]: I1125 14:43:35.093287 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dec9638d-06b3-491b-895c-a3c306acddb5-operator-scripts\") pod \"placement-11c0-account-create-9l6dt\" (UID: 
\"dec9638d-06b3-491b-895c-a3c306acddb5\") " pod="openstack/placement-11c0-account-create-9l6dt" Nov 25 14:43:35 crc kubenswrapper[4796]: I1125 14:43:35.093446 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqfrr\" (UniqueName: \"kubernetes.io/projected/26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006-kube-api-access-dqfrr\") pod \"placement-db-create-ggn22\" (UID: \"26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006\") " pod="openstack/placement-db-create-ggn22" Nov 25 14:43:35 crc kubenswrapper[4796]: I1125 14:43:35.093479 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006-operator-scripts\") pod \"placement-db-create-ggn22\" (UID: \"26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006\") " pod="openstack/placement-db-create-ggn22" Nov 25 14:43:35 crc kubenswrapper[4796]: I1125 14:43:35.094405 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006-operator-scripts\") pod \"placement-db-create-ggn22\" (UID: \"26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006\") " pod="openstack/placement-db-create-ggn22" Nov 25 14:43:35 crc kubenswrapper[4796]: I1125 14:43:35.136612 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqfrr\" (UniqueName: \"kubernetes.io/projected/26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006-kube-api-access-dqfrr\") pod \"placement-db-create-ggn22\" (UID: \"26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006\") " pod="openstack/placement-db-create-ggn22" Nov 25 14:43:35 crc kubenswrapper[4796]: I1125 14:43:35.194805 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jq5s\" (UniqueName: \"kubernetes.io/projected/dec9638d-06b3-491b-895c-a3c306acddb5-kube-api-access-8jq5s\") pod \"placement-11c0-account-create-9l6dt\" (UID: 
\"dec9638d-06b3-491b-895c-a3c306acddb5\") " pod="openstack/placement-11c0-account-create-9l6dt" Nov 25 14:43:35 crc kubenswrapper[4796]: I1125 14:43:35.194857 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dec9638d-06b3-491b-895c-a3c306acddb5-operator-scripts\") pod \"placement-11c0-account-create-9l6dt\" (UID: \"dec9638d-06b3-491b-895c-a3c306acddb5\") " pod="openstack/placement-11c0-account-create-9l6dt" Nov 25 14:43:35 crc kubenswrapper[4796]: I1125 14:43:35.194854 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-ggn22" Nov 25 14:43:35 crc kubenswrapper[4796]: I1125 14:43:35.195864 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dec9638d-06b3-491b-895c-a3c306acddb5-operator-scripts\") pod \"placement-11c0-account-create-9l6dt\" (UID: \"dec9638d-06b3-491b-895c-a3c306acddb5\") " pod="openstack/placement-11c0-account-create-9l6dt" Nov 25 14:43:35 crc kubenswrapper[4796]: I1125 14:43:35.211710 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jq5s\" (UniqueName: \"kubernetes.io/projected/dec9638d-06b3-491b-895c-a3c306acddb5-kube-api-access-8jq5s\") pod \"placement-11c0-account-create-9l6dt\" (UID: \"dec9638d-06b3-491b-895c-a3c306acddb5\") " pod="openstack/placement-11c0-account-create-9l6dt" Nov 25 14:43:35 crc kubenswrapper[4796]: I1125 14:43:35.393772 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-11c0-account-create-9l6dt" Nov 25 14:43:35 crc kubenswrapper[4796]: I1125 14:43:35.425685 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wvj8r"] Nov 25 14:43:35 crc kubenswrapper[4796]: I1125 14:43:35.493243 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6761-account-create-t2jhq"] Nov 25 14:43:35 crc kubenswrapper[4796]: W1125 14:43:35.505456 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd08ad353_8375_4a85_a3ed_66bc9d869e5c.slice/crio-2f7208fb4b0a0ebff687cc690354874e939a6963b811f06d668e7d7e8a9e5139 WatchSource:0}: Error finding container 2f7208fb4b0a0ebff687cc690354874e939a6963b811f06d668e7d7e8a9e5139: Status 404 returned error can't find the container with id 2f7208fb4b0a0ebff687cc690354874e939a6963b811f06d668e7d7e8a9e5139 Nov 25 14:43:35 crc kubenswrapper[4796]: I1125 14:43:35.557174 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-jftkt" podUID="9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718" containerName="ovn-controller" probeResult="failure" output=< Nov 25 14:43:35 crc kubenswrapper[4796]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 14:43:35 crc kubenswrapper[4796]: > Nov 25 14:43:35 crc kubenswrapper[4796]: I1125 14:43:35.616582 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-ggn22"] Nov 25 14:43:35 crc kubenswrapper[4796]: I1125 14:43:35.827142 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-11c0-account-create-9l6dt"] Nov 25 14:43:35 crc kubenswrapper[4796]: W1125 14:43:35.860232 4796 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddec9638d_06b3_491b_895c_a3c306acddb5.slice/crio-909cdef95c1af3a2203312583045023eec288a09adce39cebd4e2aed8e34144c WatchSource:0}: Error finding container 909cdef95c1af3a2203312583045023eec288a09adce39cebd4e2aed8e34144c: Status 404 returned error can't find the container with id 909cdef95c1af3a2203312583045023eec288a09adce39cebd4e2aed8e34144c Nov 25 14:43:36 crc kubenswrapper[4796]: I1125 14:43:36.313514 4796 generic.go:334] "Generic (PLEG): container finished" podID="23eee6d0-1c05-4c07-a956-888ec367e90a" containerID="73ff7fb769c5ef04d811215f7acd39f4a2df2642b045f452f9afe986677bda3f" exitCode=0 Nov 25 14:43:36 crc kubenswrapper[4796]: I1125 14:43:36.313619 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wvj8r" event={"ID":"23eee6d0-1c05-4c07-a956-888ec367e90a","Type":"ContainerDied","Data":"73ff7fb769c5ef04d811215f7acd39f4a2df2642b045f452f9afe986677bda3f"} Nov 25 14:43:36 crc kubenswrapper[4796]: I1125 14:43:36.313656 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wvj8r" event={"ID":"23eee6d0-1c05-4c07-a956-888ec367e90a","Type":"ContainerStarted","Data":"f79b14eca2fc5c18fc12967ddd3754ca53de489561c24af6c5b29f272abd055f"} Nov 25 14:43:36 crc kubenswrapper[4796]: I1125 14:43:36.317455 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-11c0-account-create-9l6dt" event={"ID":"dec9638d-06b3-491b-895c-a3c306acddb5","Type":"ContainerStarted","Data":"3629dccbf558507c73ce3bf836e48149f30db2720b061acc8e139b98121b3323"} Nov 25 14:43:36 crc kubenswrapper[4796]: I1125 14:43:36.317492 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-11c0-account-create-9l6dt" event={"ID":"dec9638d-06b3-491b-895c-a3c306acddb5","Type":"ContainerStarted","Data":"909cdef95c1af3a2203312583045023eec288a09adce39cebd4e2aed8e34144c"} Nov 25 14:43:36 crc kubenswrapper[4796]: I1125 14:43:36.319876 4796 
generic.go:334] "Generic (PLEG): container finished" podID="26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006" containerID="10ade1f6739c4fdacf0cb2f32a5d4429a897b4a5447c2d636bad8757aa5cc408" exitCode=0 Nov 25 14:43:36 crc kubenswrapper[4796]: I1125 14:43:36.320425 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ggn22" event={"ID":"26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006","Type":"ContainerDied","Data":"10ade1f6739c4fdacf0cb2f32a5d4429a897b4a5447c2d636bad8757aa5cc408"} Nov 25 14:43:36 crc kubenswrapper[4796]: I1125 14:43:36.320528 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ggn22" event={"ID":"26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006","Type":"ContainerStarted","Data":"41564dff39e0b6d979a68165d339d7cd79a77edf6490c09a0d3615d4cabffaff"} Nov 25 14:43:36 crc kubenswrapper[4796]: I1125 14:43:36.321977 4796 generic.go:334] "Generic (PLEG): container finished" podID="d08ad353-8375-4a85-a3ed-66bc9d869e5c" containerID="b9832b7c4b7dd6978820e1544fc8aad479fe066db647a92c7f5b539c1f1f7e1f" exitCode=0 Nov 25 14:43:36 crc kubenswrapper[4796]: I1125 14:43:36.322019 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6761-account-create-t2jhq" event={"ID":"d08ad353-8375-4a85-a3ed-66bc9d869e5c","Type":"ContainerDied","Data":"b9832b7c4b7dd6978820e1544fc8aad479fe066db647a92c7f5b539c1f1f7e1f"} Nov 25 14:43:36 crc kubenswrapper[4796]: I1125 14:43:36.322038 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6761-account-create-t2jhq" event={"ID":"d08ad353-8375-4a85-a3ed-66bc9d869e5c","Type":"ContainerStarted","Data":"2f7208fb4b0a0ebff687cc690354874e939a6963b811f06d668e7d7e8a9e5139"} Nov 25 14:43:36 crc kubenswrapper[4796]: I1125 14:43:36.376127 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-11c0-account-create-9l6dt" podStartSLOduration=2.376107291 podStartE2EDuration="2.376107291s" podCreationTimestamp="2025-11-25 
14:43:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:43:36.368207078 +0000 UTC m=+1144.711316522" watchObservedRunningTime="2025-11-25 14:43:36.376107291 +0000 UTC m=+1144.719216725" Nov 25 14:43:37 crc kubenswrapper[4796]: I1125 14:43:37.334082 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1729cee4-39e5-4e3c-90ed-51b16a110a6a","Type":"ContainerDied","Data":"8961effce604b4c965298d42d063ee066e28fd802e9ac7ff26c6935f9c6c981d"} Nov 25 14:43:37 crc kubenswrapper[4796]: I1125 14:43:37.334548 4796 generic.go:334] "Generic (PLEG): container finished" podID="1729cee4-39e5-4e3c-90ed-51b16a110a6a" containerID="8961effce604b4c965298d42d063ee066e28fd802e9ac7ff26c6935f9c6c981d" exitCode=0 Nov 25 14:43:37 crc kubenswrapper[4796]: I1125 14:43:37.337081 4796 generic.go:334] "Generic (PLEG): container finished" podID="dec9638d-06b3-491b-895c-a3c306acddb5" containerID="3629dccbf558507c73ce3bf836e48149f30db2720b061acc8e139b98121b3323" exitCode=0 Nov 25 14:43:37 crc kubenswrapper[4796]: I1125 14:43:37.337150 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-11c0-account-create-9l6dt" event={"ID":"dec9638d-06b3-491b-895c-a3c306acddb5","Type":"ContainerDied","Data":"3629dccbf558507c73ce3bf836e48149f30db2720b061acc8e139b98121b3323"} Nov 25 14:43:37 crc kubenswrapper[4796]: I1125 14:43:37.789635 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6761-account-create-t2jhq" Nov 25 14:43:37 crc kubenswrapper[4796]: I1125 14:43:37.851416 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wvj8r" Nov 25 14:43:37 crc kubenswrapper[4796]: I1125 14:43:37.899526 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-ggn22" Nov 25 14:43:37 crc kubenswrapper[4796]: I1125 14:43:37.951389 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d08ad353-8375-4a85-a3ed-66bc9d869e5c-operator-scripts\") pod \"d08ad353-8375-4a85-a3ed-66bc9d869e5c\" (UID: \"d08ad353-8375-4a85-a3ed-66bc9d869e5c\") " Nov 25 14:43:37 crc kubenswrapper[4796]: I1125 14:43:37.951442 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23eee6d0-1c05-4c07-a956-888ec367e90a-operator-scripts\") pod \"23eee6d0-1c05-4c07-a956-888ec367e90a\" (UID: \"23eee6d0-1c05-4c07-a956-888ec367e90a\") " Nov 25 14:43:37 crc kubenswrapper[4796]: I1125 14:43:37.951667 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-295ps\" (UniqueName: \"kubernetes.io/projected/23eee6d0-1c05-4c07-a956-888ec367e90a-kube-api-access-295ps\") pod \"23eee6d0-1c05-4c07-a956-888ec367e90a\" (UID: \"23eee6d0-1c05-4c07-a956-888ec367e90a\") " Nov 25 14:43:37 crc kubenswrapper[4796]: I1125 14:43:37.951729 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5xs2\" (UniqueName: \"kubernetes.io/projected/d08ad353-8375-4a85-a3ed-66bc9d869e5c-kube-api-access-k5xs2\") pod \"d08ad353-8375-4a85-a3ed-66bc9d869e5c\" (UID: \"d08ad353-8375-4a85-a3ed-66bc9d869e5c\") " Nov 25 14:43:37 crc kubenswrapper[4796]: I1125 14:43:37.952113 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23eee6d0-1c05-4c07-a956-888ec367e90a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "23eee6d0-1c05-4c07-a956-888ec367e90a" (UID: "23eee6d0-1c05-4c07-a956-888ec367e90a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:43:37 crc kubenswrapper[4796]: I1125 14:43:37.952125 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d08ad353-8375-4a85-a3ed-66bc9d869e5c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d08ad353-8375-4a85-a3ed-66bc9d869e5c" (UID: "d08ad353-8375-4a85-a3ed-66bc9d869e5c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:43:37 crc kubenswrapper[4796]: I1125 14:43:37.957065 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23eee6d0-1c05-4c07-a956-888ec367e90a-kube-api-access-295ps" (OuterVolumeSpecName: "kube-api-access-295ps") pod "23eee6d0-1c05-4c07-a956-888ec367e90a" (UID: "23eee6d0-1c05-4c07-a956-888ec367e90a"). InnerVolumeSpecName "kube-api-access-295ps". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:43:37 crc kubenswrapper[4796]: I1125 14:43:37.957219 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d08ad353-8375-4a85-a3ed-66bc9d869e5c-kube-api-access-k5xs2" (OuterVolumeSpecName: "kube-api-access-k5xs2") pod "d08ad353-8375-4a85-a3ed-66bc9d869e5c" (UID: "d08ad353-8375-4a85-a3ed-66bc9d869e5c"). InnerVolumeSpecName "kube-api-access-k5xs2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.052970 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006-operator-scripts\") pod \"26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006\" (UID: \"26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006\") " Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.053133 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqfrr\" (UniqueName: \"kubernetes.io/projected/26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006-kube-api-access-dqfrr\") pod \"26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006\" (UID: \"26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006\") " Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.053517 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-295ps\" (UniqueName: \"kubernetes.io/projected/23eee6d0-1c05-4c07-a956-888ec367e90a-kube-api-access-295ps\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.053538 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5xs2\" (UniqueName: \"kubernetes.io/projected/d08ad353-8375-4a85-a3ed-66bc9d869e5c-kube-api-access-k5xs2\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.053548 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d08ad353-8375-4a85-a3ed-66bc9d869e5c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.053558 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23eee6d0-1c05-4c07-a956-888ec367e90a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.053653 4796 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006" (UID: "26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.055701 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006-kube-api-access-dqfrr" (OuterVolumeSpecName: "kube-api-access-dqfrr") pod "26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006" (UID: "26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006"). InnerVolumeSpecName "kube-api-access-dqfrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.155496 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqfrr\" (UniqueName: \"kubernetes.io/projected/26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006-kube-api-access-dqfrr\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.155600 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.353184 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1729cee4-39e5-4e3c-90ed-51b16a110a6a","Type":"ContainerStarted","Data":"12650c0cae01ad371c22b7ea4547c8bfec4c1a7fb39b02450cc1837ad38e5d1a"} Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.353456 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.358960 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-create-ggn22" event={"ID":"26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006","Type":"ContainerDied","Data":"41564dff39e0b6d979a68165d339d7cd79a77edf6490c09a0d3615d4cabffaff"} Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.358983 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-ggn22" Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.359002 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41564dff39e0b6d979a68165d339d7cd79a77edf6490c09a0d3615d4cabffaff" Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.360757 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6761-account-create-t2jhq" Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.360770 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6761-account-create-t2jhq" event={"ID":"d08ad353-8375-4a85-a3ed-66bc9d869e5c","Type":"ContainerDied","Data":"2f7208fb4b0a0ebff687cc690354874e939a6963b811f06d668e7d7e8a9e5139"} Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.360797 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f7208fb4b0a0ebff687cc690354874e939a6963b811f06d668e7d7e8a9e5139" Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.362661 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-wvj8r" Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.362648 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wvj8r" event={"ID":"23eee6d0-1c05-4c07-a956-888ec367e90a","Type":"ContainerDied","Data":"f79b14eca2fc5c18fc12967ddd3754ca53de489561c24af6c5b29f272abd055f"} Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.362708 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f79b14eca2fc5c18fc12967ddd3754ca53de489561c24af6c5b29f272abd055f" Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.386782 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=53.049321228 podStartE2EDuration="58.386758502s" podCreationTimestamp="2025-11-25 14:42:40 +0000 UTC" firstStartedPulling="2025-11-25 14:42:53.961282797 +0000 UTC m=+1102.304392221" lastFinishedPulling="2025-11-25 14:42:59.298720071 +0000 UTC m=+1107.641829495" observedRunningTime="2025-11-25 14:43:38.382021415 +0000 UTC m=+1146.725130849" watchObservedRunningTime="2025-11-25 14:43:38.386758502 +0000 UTC m=+1146.729867926" Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.662393 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-11c0-account-create-9l6dt" Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.765088 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jq5s\" (UniqueName: \"kubernetes.io/projected/dec9638d-06b3-491b-895c-a3c306acddb5-kube-api-access-8jq5s\") pod \"dec9638d-06b3-491b-895c-a3c306acddb5\" (UID: \"dec9638d-06b3-491b-895c-a3c306acddb5\") " Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.765278 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dec9638d-06b3-491b-895c-a3c306acddb5-operator-scripts\") pod \"dec9638d-06b3-491b-895c-a3c306acddb5\" (UID: \"dec9638d-06b3-491b-895c-a3c306acddb5\") " Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.765623 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dec9638d-06b3-491b-895c-a3c306acddb5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dec9638d-06b3-491b-895c-a3c306acddb5" (UID: "dec9638d-06b3-491b-895c-a3c306acddb5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.766325 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dec9638d-06b3-491b-895c-a3c306acddb5-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.771042 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dec9638d-06b3-491b-895c-a3c306acddb5-kube-api-access-8jq5s" (OuterVolumeSpecName: "kube-api-access-8jq5s") pod "dec9638d-06b3-491b-895c-a3c306acddb5" (UID: "dec9638d-06b3-491b-895c-a3c306acddb5"). InnerVolumeSpecName "kube-api-access-8jq5s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:43:38 crc kubenswrapper[4796]: I1125 14:43:38.868120 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jq5s\" (UniqueName: \"kubernetes.io/projected/dec9638d-06b3-491b-895c-a3c306acddb5-kube-api-access-8jq5s\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:39 crc kubenswrapper[4796]: I1125 14:43:39.385276 4796 generic.go:334] "Generic (PLEG): container finished" podID="8a9e78aa-7f69-46de-b6a9-03f837e4f364" containerID="6c995681bb9faddc4335ae6804a9b7d4f78934fa1fec3b3736d8c7efd0d0cbe4" exitCode=0 Nov 25 14:43:39 crc kubenswrapper[4796]: I1125 14:43:39.385564 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qbvtm" event={"ID":"8a9e78aa-7f69-46de-b6a9-03f837e4f364","Type":"ContainerDied","Data":"6c995681bb9faddc4335ae6804a9b7d4f78934fa1fec3b3736d8c7efd0d0cbe4"} Nov 25 14:43:39 crc kubenswrapper[4796]: I1125 14:43:39.388188 4796 generic.go:334] "Generic (PLEG): container finished" podID="df357d5a-93ca-48cc-bcec-b01ba247136e" containerID="2f9c4eeced77b6eec1a9654b22885d08fe3f01dda9ebd117770b343c82ceaa1c" exitCode=0 Nov 25 14:43:39 crc kubenswrapper[4796]: I1125 14:43:39.388274 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"df357d5a-93ca-48cc-bcec-b01ba247136e","Type":"ContainerDied","Data":"2f9c4eeced77b6eec1a9654b22885d08fe3f01dda9ebd117770b343c82ceaa1c"} Nov 25 14:43:39 crc kubenswrapper[4796]: I1125 14:43:39.390207 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-11c0-account-create-9l6dt" Nov 25 14:43:39 crc kubenswrapper[4796]: I1125 14:43:39.390257 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-11c0-account-create-9l6dt" event={"ID":"dec9638d-06b3-491b-895c-a3c306acddb5","Type":"ContainerDied","Data":"909cdef95c1af3a2203312583045023eec288a09adce39cebd4e2aed8e34144c"} Nov 25 14:43:39 crc kubenswrapper[4796]: I1125 14:43:39.390278 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="909cdef95c1af3a2203312583045023eec288a09adce39cebd4e2aed8e34144c" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.043753 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-4lg86"] Nov 25 14:43:40 crc kubenswrapper[4796]: E1125 14:43:40.044117 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d08ad353-8375-4a85-a3ed-66bc9d869e5c" containerName="mariadb-account-create" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.044128 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d08ad353-8375-4a85-a3ed-66bc9d869e5c" containerName="mariadb-account-create" Nov 25 14:43:40 crc kubenswrapper[4796]: E1125 14:43:40.044142 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dec9638d-06b3-491b-895c-a3c306acddb5" containerName="mariadb-account-create" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.044148 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="dec9638d-06b3-491b-895c-a3c306acddb5" containerName="mariadb-account-create" Nov 25 14:43:40 crc kubenswrapper[4796]: E1125 14:43:40.044157 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006" containerName="mariadb-database-create" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.044163 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006" 
containerName="mariadb-database-create" Nov 25 14:43:40 crc kubenswrapper[4796]: E1125 14:43:40.044176 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23eee6d0-1c05-4c07-a956-888ec367e90a" containerName="mariadb-database-create" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.044182 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="23eee6d0-1c05-4c07-a956-888ec367e90a" containerName="mariadb-database-create" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.044338 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006" containerName="mariadb-database-create" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.044350 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d08ad353-8375-4a85-a3ed-66bc9d869e5c" containerName="mariadb-account-create" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.044365 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="dec9638d-06b3-491b-895c-a3c306acddb5" containerName="mariadb-account-create" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.044374 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="23eee6d0-1c05-4c07-a956-888ec367e90a" containerName="mariadb-database-create" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.044947 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4lg86" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.054880 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4lg86"] Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.141216 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-5bd8-account-create-m8848"] Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.142287 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-5bd8-account-create-m8848" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.144119 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.149536 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-5bd8-account-create-m8848"] Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.193319 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef276b07-ccd1-4f2d-ab5f-b7208745b3e8-operator-scripts\") pod \"glance-db-create-4lg86\" (UID: \"ef276b07-ccd1-4f2d-ab5f-b7208745b3e8\") " pod="openstack/glance-db-create-4lg86" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.193663 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwdjw\" (UniqueName: \"kubernetes.io/projected/ef276b07-ccd1-4f2d-ab5f-b7208745b3e8-kube-api-access-hwdjw\") pod \"glance-db-create-4lg86\" (UID: \"ef276b07-ccd1-4f2d-ab5f-b7208745b3e8\") " pod="openstack/glance-db-create-4lg86" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.294909 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14330b10-0b24-42a5-a682-cbc7cdb4a546-operator-scripts\") pod \"glance-5bd8-account-create-m8848\" (UID: \"14330b10-0b24-42a5-a682-cbc7cdb4a546\") " pod="openstack/glance-5bd8-account-create-m8848" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.294969 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4b9r\" (UniqueName: \"kubernetes.io/projected/14330b10-0b24-42a5-a682-cbc7cdb4a546-kube-api-access-q4b9r\") pod \"glance-5bd8-account-create-m8848\" (UID: 
\"14330b10-0b24-42a5-a682-cbc7cdb4a546\") " pod="openstack/glance-5bd8-account-create-m8848" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.295042 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef276b07-ccd1-4f2d-ab5f-b7208745b3e8-operator-scripts\") pod \"glance-db-create-4lg86\" (UID: \"ef276b07-ccd1-4f2d-ab5f-b7208745b3e8\") " pod="openstack/glance-db-create-4lg86" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.295336 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwdjw\" (UniqueName: \"kubernetes.io/projected/ef276b07-ccd1-4f2d-ab5f-b7208745b3e8-kube-api-access-hwdjw\") pod \"glance-db-create-4lg86\" (UID: \"ef276b07-ccd1-4f2d-ab5f-b7208745b3e8\") " pod="openstack/glance-db-create-4lg86" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.295738 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef276b07-ccd1-4f2d-ab5f-b7208745b3e8-operator-scripts\") pod \"glance-db-create-4lg86\" (UID: \"ef276b07-ccd1-4f2d-ab5f-b7208745b3e8\") " pod="openstack/glance-db-create-4lg86" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.314707 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwdjw\" (UniqueName: \"kubernetes.io/projected/ef276b07-ccd1-4f2d-ab5f-b7208745b3e8-kube-api-access-hwdjw\") pod \"glance-db-create-4lg86\" (UID: \"ef276b07-ccd1-4f2d-ab5f-b7208745b3e8\") " pod="openstack/glance-db-create-4lg86" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.361745 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-4lg86" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.396627 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14330b10-0b24-42a5-a682-cbc7cdb4a546-operator-scripts\") pod \"glance-5bd8-account-create-m8848\" (UID: \"14330b10-0b24-42a5-a682-cbc7cdb4a546\") " pod="openstack/glance-5bd8-account-create-m8848" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.396705 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4b9r\" (UniqueName: \"kubernetes.io/projected/14330b10-0b24-42a5-a682-cbc7cdb4a546-kube-api-access-q4b9r\") pod \"glance-5bd8-account-create-m8848\" (UID: \"14330b10-0b24-42a5-a682-cbc7cdb4a546\") " pod="openstack/glance-5bd8-account-create-m8848" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.399586 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14330b10-0b24-42a5-a682-cbc7cdb4a546-operator-scripts\") pod \"glance-5bd8-account-create-m8848\" (UID: \"14330b10-0b24-42a5-a682-cbc7cdb4a546\") " pod="openstack/glance-5bd8-account-create-m8848" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.423395 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4b9r\" (UniqueName: \"kubernetes.io/projected/14330b10-0b24-42a5-a682-cbc7cdb4a546-kube-api-access-q4b9r\") pod \"glance-5bd8-account-create-m8848\" (UID: \"14330b10-0b24-42a5-a682-cbc7cdb4a546\") " pod="openstack/glance-5bd8-account-create-m8848" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.429706 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"df357d5a-93ca-48cc-bcec-b01ba247136e","Type":"ContainerStarted","Data":"1f0481822b5300edcf061559c56928ac61b6e1ce3f5ef850a84d3cc5af5a950b"} Nov 25 14:43:40 crc 
kubenswrapper[4796]: I1125 14:43:40.430176 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.456690 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5bd8-account-create-m8848" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.457768 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=53.954900306 podStartE2EDuration="1m1.457744386s" podCreationTimestamp="2025-11-25 14:42:39 +0000 UTC" firstStartedPulling="2025-11-25 14:42:55.857742696 +0000 UTC m=+1104.200852120" lastFinishedPulling="2025-11-25 14:43:03.360586776 +0000 UTC m=+1111.703696200" observedRunningTime="2025-11-25 14:43:40.451382835 +0000 UTC m=+1148.794492279" watchObservedRunningTime="2025-11-25 14:43:40.457744386 +0000 UTC m=+1148.800853810" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.563259 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-jftkt" podUID="9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718" containerName="ovn-controller" probeResult="failure" output=< Nov 25 14:43:40 crc kubenswrapper[4796]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 14:43:40 crc kubenswrapper[4796]: > Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.576051 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bcptz" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.578188 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bcptz" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.798584 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-qbvtm" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.847537 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-jftkt-config-wndvl"] Nov 25 14:43:40 crc kubenswrapper[4796]: E1125 14:43:40.847862 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a9e78aa-7f69-46de-b6a9-03f837e4f364" containerName="swift-ring-rebalance" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.847875 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a9e78aa-7f69-46de-b6a9-03f837e4f364" containerName="swift-ring-rebalance" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.848061 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a9e78aa-7f69-46de-b6a9-03f837e4f364" containerName="swift-ring-rebalance" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.848661 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jftkt-config-wndvl" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.852439 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.858752 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jftkt-config-wndvl"] Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.921396 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8a9e78aa-7f69-46de-b6a9-03f837e4f364-ring-data-devices\") pod \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.921633 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttrkx\" (UniqueName: 
\"kubernetes.io/projected/8a9e78aa-7f69-46de-b6a9-03f837e4f364-kube-api-access-ttrkx\") pod \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.921706 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a9e78aa-7f69-46de-b6a9-03f837e4f364-scripts\") pod \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.921763 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8a9e78aa-7f69-46de-b6a9-03f837e4f364-etc-swift\") pod \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.921919 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8a9e78aa-7f69-46de-b6a9-03f837e4f364-dispersionconf\") pod \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.921963 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a9e78aa-7f69-46de-b6a9-03f837e4f364-combined-ca-bundle\") pod \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.921998 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8a9e78aa-7f69-46de-b6a9-03f837e4f364-swiftconf\") pod \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\" (UID: \"8a9e78aa-7f69-46de-b6a9-03f837e4f364\") " Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.923978 4796 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a9e78aa-7f69-46de-b6a9-03f837e4f364-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "8a9e78aa-7f69-46de-b6a9-03f837e4f364" (UID: "8a9e78aa-7f69-46de-b6a9-03f837e4f364"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.924199 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a9e78aa-7f69-46de-b6a9-03f837e4f364-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8a9e78aa-7f69-46de-b6a9-03f837e4f364" (UID: "8a9e78aa-7f69-46de-b6a9-03f837e4f364"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.930929 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a9e78aa-7f69-46de-b6a9-03f837e4f364-kube-api-access-ttrkx" (OuterVolumeSpecName: "kube-api-access-ttrkx") pod "8a9e78aa-7f69-46de-b6a9-03f837e4f364" (UID: "8a9e78aa-7f69-46de-b6a9-03f837e4f364"). InnerVolumeSpecName "kube-api-access-ttrkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.933724 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a9e78aa-7f69-46de-b6a9-03f837e4f364-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "8a9e78aa-7f69-46de-b6a9-03f837e4f364" (UID: "8a9e78aa-7f69-46de-b6a9-03f837e4f364"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.947319 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a9e78aa-7f69-46de-b6a9-03f837e4f364-scripts" (OuterVolumeSpecName: "scripts") pod "8a9e78aa-7f69-46de-b6a9-03f837e4f364" (UID: "8a9e78aa-7f69-46de-b6a9-03f837e4f364"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.952043 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a9e78aa-7f69-46de-b6a9-03f837e4f364-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "8a9e78aa-7f69-46de-b6a9-03f837e4f364" (UID: "8a9e78aa-7f69-46de-b6a9-03f837e4f364"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.953181 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a9e78aa-7f69-46de-b6a9-03f837e4f364-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a9e78aa-7f69-46de-b6a9-03f837e4f364" (UID: "8a9e78aa-7f69-46de-b6a9-03f837e4f364"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:43:40 crc kubenswrapper[4796]: I1125 14:43:40.980701 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-5bd8-account-create-m8848"] Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.024917 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ad5f0df-6e46-400e-8e99-435d224e0b02-scripts\") pod \"ovn-controller-jftkt-config-wndvl\" (UID: \"5ad5f0df-6e46-400e-8e99-435d224e0b02\") " pod="openstack/ovn-controller-jftkt-config-wndvl" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.025001 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad5f0df-6e46-400e-8e99-435d224e0b02-additional-scripts\") pod \"ovn-controller-jftkt-config-wndvl\" (UID: \"5ad5f0df-6e46-400e-8e99-435d224e0b02\") " pod="openstack/ovn-controller-jftkt-config-wndvl" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.025051 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjzxf\" (UniqueName: \"kubernetes.io/projected/5ad5f0df-6e46-400e-8e99-435d224e0b02-kube-api-access-cjzxf\") pod \"ovn-controller-jftkt-config-wndvl\" (UID: \"5ad5f0df-6e46-400e-8e99-435d224e0b02\") " pod="openstack/ovn-controller-jftkt-config-wndvl" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.025117 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5ad5f0df-6e46-400e-8e99-435d224e0b02-var-run-ovn\") pod \"ovn-controller-jftkt-config-wndvl\" (UID: \"5ad5f0df-6e46-400e-8e99-435d224e0b02\") " pod="openstack/ovn-controller-jftkt-config-wndvl" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.025159 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5ad5f0df-6e46-400e-8e99-435d224e0b02-var-log-ovn\") pod \"ovn-controller-jftkt-config-wndvl\" (UID: \"5ad5f0df-6e46-400e-8e99-435d224e0b02\") " pod="openstack/ovn-controller-jftkt-config-wndvl" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.025184 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5ad5f0df-6e46-400e-8e99-435d224e0b02-var-run\") pod \"ovn-controller-jftkt-config-wndvl\" (UID: \"5ad5f0df-6e46-400e-8e99-435d224e0b02\") " pod="openstack/ovn-controller-jftkt-config-wndvl" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.025273 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a9e78aa-7f69-46de-b6a9-03f837e4f364-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.025287 4796 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8a9e78aa-7f69-46de-b6a9-03f837e4f364-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.025300 4796 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8a9e78aa-7f69-46de-b6a9-03f837e4f364-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.025309 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttrkx\" (UniqueName: \"kubernetes.io/projected/8a9e78aa-7f69-46de-b6a9-03f837e4f364-kube-api-access-ttrkx\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.025324 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/8a9e78aa-7f69-46de-b6a9-03f837e4f364-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.025333 4796 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8a9e78aa-7f69-46de-b6a9-03f837e4f364-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.025343 4796 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8a9e78aa-7f69-46de-b6a9-03f837e4f364-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.048651 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4lg86"] Nov 25 14:43:41 crc kubenswrapper[4796]: W1125 14:43:41.069130 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef276b07_ccd1_4f2d_ab5f_b7208745b3e8.slice/crio-663db62ffebe993cf5296fb94eebba366da63b9a1d8e5689f5a7df3efb886b9d WatchSource:0}: Error finding container 663db62ffebe993cf5296fb94eebba366da63b9a1d8e5689f5a7df3efb886b9d: Status 404 returned error can't find the container with id 663db62ffebe993cf5296fb94eebba366da63b9a1d8e5689f5a7df3efb886b9d Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.126406 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5ad5f0df-6e46-400e-8e99-435d224e0b02-var-run-ovn\") pod \"ovn-controller-jftkt-config-wndvl\" (UID: \"5ad5f0df-6e46-400e-8e99-435d224e0b02\") " pod="openstack/ovn-controller-jftkt-config-wndvl" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.126484 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5ad5f0df-6e46-400e-8e99-435d224e0b02-var-log-ovn\") pod 
\"ovn-controller-jftkt-config-wndvl\" (UID: \"5ad5f0df-6e46-400e-8e99-435d224e0b02\") " pod="openstack/ovn-controller-jftkt-config-wndvl" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.126509 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5ad5f0df-6e46-400e-8e99-435d224e0b02-var-run\") pod \"ovn-controller-jftkt-config-wndvl\" (UID: \"5ad5f0df-6e46-400e-8e99-435d224e0b02\") " pod="openstack/ovn-controller-jftkt-config-wndvl" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.126637 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ad5f0df-6e46-400e-8e99-435d224e0b02-scripts\") pod \"ovn-controller-jftkt-config-wndvl\" (UID: \"5ad5f0df-6e46-400e-8e99-435d224e0b02\") " pod="openstack/ovn-controller-jftkt-config-wndvl" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.126673 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad5f0df-6e46-400e-8e99-435d224e0b02-additional-scripts\") pod \"ovn-controller-jftkt-config-wndvl\" (UID: \"5ad5f0df-6e46-400e-8e99-435d224e0b02\") " pod="openstack/ovn-controller-jftkt-config-wndvl" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.126706 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjzxf\" (UniqueName: \"kubernetes.io/projected/5ad5f0df-6e46-400e-8e99-435d224e0b02-kube-api-access-cjzxf\") pod \"ovn-controller-jftkt-config-wndvl\" (UID: \"5ad5f0df-6e46-400e-8e99-435d224e0b02\") " pod="openstack/ovn-controller-jftkt-config-wndvl" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.127326 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5ad5f0df-6e46-400e-8e99-435d224e0b02-var-run-ovn\") pod 
\"ovn-controller-jftkt-config-wndvl\" (UID: \"5ad5f0df-6e46-400e-8e99-435d224e0b02\") " pod="openstack/ovn-controller-jftkt-config-wndvl" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.127377 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5ad5f0df-6e46-400e-8e99-435d224e0b02-var-run\") pod \"ovn-controller-jftkt-config-wndvl\" (UID: \"5ad5f0df-6e46-400e-8e99-435d224e0b02\") " pod="openstack/ovn-controller-jftkt-config-wndvl" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.127423 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5ad5f0df-6e46-400e-8e99-435d224e0b02-var-log-ovn\") pod \"ovn-controller-jftkt-config-wndvl\" (UID: \"5ad5f0df-6e46-400e-8e99-435d224e0b02\") " pod="openstack/ovn-controller-jftkt-config-wndvl" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.128324 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad5f0df-6e46-400e-8e99-435d224e0b02-additional-scripts\") pod \"ovn-controller-jftkt-config-wndvl\" (UID: \"5ad5f0df-6e46-400e-8e99-435d224e0b02\") " pod="openstack/ovn-controller-jftkt-config-wndvl" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.129358 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ad5f0df-6e46-400e-8e99-435d224e0b02-scripts\") pod \"ovn-controller-jftkt-config-wndvl\" (UID: \"5ad5f0df-6e46-400e-8e99-435d224e0b02\") " pod="openstack/ovn-controller-jftkt-config-wndvl" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.154533 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjzxf\" (UniqueName: \"kubernetes.io/projected/5ad5f0df-6e46-400e-8e99-435d224e0b02-kube-api-access-cjzxf\") pod \"ovn-controller-jftkt-config-wndvl\" (UID: 
\"5ad5f0df-6e46-400e-8e99-435d224e0b02\") " pod="openstack/ovn-controller-jftkt-config-wndvl" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.166190 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jftkt-config-wndvl" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.428965 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qbvtm" event={"ID":"8a9e78aa-7f69-46de-b6a9-03f837e4f364","Type":"ContainerDied","Data":"9a116e732e52d59c3809915db17734f60c83aa8b7fd663f9badc0ac927f8a6c4"} Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.429020 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a116e732e52d59c3809915db17734f60c83aa8b7fd663f9badc0ac927f8a6c4" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.429099 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qbvtm" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.445518 4796 generic.go:334] "Generic (PLEG): container finished" podID="14330b10-0b24-42a5-a682-cbc7cdb4a546" containerID="d0b4a45686e2d526926edae22b46ee125ee8cb7172e2d45de5986dca16ba07ce" exitCode=0 Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.445638 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5bd8-account-create-m8848" event={"ID":"14330b10-0b24-42a5-a682-cbc7cdb4a546","Type":"ContainerDied","Data":"d0b4a45686e2d526926edae22b46ee125ee8cb7172e2d45de5986dca16ba07ce"} Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.445673 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5bd8-account-create-m8848" event={"ID":"14330b10-0b24-42a5-a682-cbc7cdb4a546","Type":"ContainerStarted","Data":"a36ce94535a040f1d85685258e2d5be312295e4973e92cd4fa1ec93a951ae9fc"} Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.449028 4796 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/glance-db-create-4lg86" event={"ID":"ef276b07-ccd1-4f2d-ab5f-b7208745b3e8","Type":"ContainerStarted","Data":"ab89f5c16b7e93a7afc5b1a6221b5fbdb4328e8c12d2dd1c453fe6dabd1aeaee"} Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.449050 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4lg86" event={"ID":"ef276b07-ccd1-4f2d-ab5f-b7208745b3e8","Type":"ContainerStarted","Data":"663db62ffebe993cf5296fb94eebba366da63b9a1d8e5689f5a7df3efb886b9d"} Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.510333 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-4lg86" podStartSLOduration=1.510300593 podStartE2EDuration="1.510300593s" podCreationTimestamp="2025-11-25 14:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:43:41.487189372 +0000 UTC m=+1149.830298816" watchObservedRunningTime="2025-11-25 14:43:41.510300593 +0000 UTC m=+1149.853410017" Nov 25 14:43:41 crc kubenswrapper[4796]: I1125 14:43:41.606074 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jftkt-config-wndvl"] Nov 25 14:43:42 crc kubenswrapper[4796]: I1125 14:43:42.456357 4796 generic.go:334] "Generic (PLEG): container finished" podID="5ad5f0df-6e46-400e-8e99-435d224e0b02" containerID="e3ab61e96b4b14dde9a37c7a8b99aab5a30ec927bb79c80dfdcc65657de84834" exitCode=0 Nov 25 14:43:42 crc kubenswrapper[4796]: I1125 14:43:42.456433 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jftkt-config-wndvl" event={"ID":"5ad5f0df-6e46-400e-8e99-435d224e0b02","Type":"ContainerDied","Data":"e3ab61e96b4b14dde9a37c7a8b99aab5a30ec927bb79c80dfdcc65657de84834"} Nov 25 14:43:42 crc kubenswrapper[4796]: I1125 14:43:42.456691 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jftkt-config-wndvl" 
event={"ID":"5ad5f0df-6e46-400e-8e99-435d224e0b02","Type":"ContainerStarted","Data":"4bbbe9181a3d1b0412690bfbba6a14a76812d1f3cc9041ccf1115ae0bbfa8665"} Nov 25 14:43:42 crc kubenswrapper[4796]: I1125 14:43:42.458653 4796 generic.go:334] "Generic (PLEG): container finished" podID="ef276b07-ccd1-4f2d-ab5f-b7208745b3e8" containerID="ab89f5c16b7e93a7afc5b1a6221b5fbdb4328e8c12d2dd1c453fe6dabd1aeaee" exitCode=0 Nov 25 14:43:42 crc kubenswrapper[4796]: I1125 14:43:42.458689 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4lg86" event={"ID":"ef276b07-ccd1-4f2d-ab5f-b7208745b3e8","Type":"ContainerDied","Data":"ab89f5c16b7e93a7afc5b1a6221b5fbdb4328e8c12d2dd1c453fe6dabd1aeaee"} Nov 25 14:43:42 crc kubenswrapper[4796]: I1125 14:43:42.797000 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5bd8-account-create-m8848" Nov 25 14:43:42 crc kubenswrapper[4796]: I1125 14:43:42.970206 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4b9r\" (UniqueName: \"kubernetes.io/projected/14330b10-0b24-42a5-a682-cbc7cdb4a546-kube-api-access-q4b9r\") pod \"14330b10-0b24-42a5-a682-cbc7cdb4a546\" (UID: \"14330b10-0b24-42a5-a682-cbc7cdb4a546\") " Nov 25 14:43:42 crc kubenswrapper[4796]: I1125 14:43:42.970342 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14330b10-0b24-42a5-a682-cbc7cdb4a546-operator-scripts\") pod \"14330b10-0b24-42a5-a682-cbc7cdb4a546\" (UID: \"14330b10-0b24-42a5-a682-cbc7cdb4a546\") " Nov 25 14:43:42 crc kubenswrapper[4796]: I1125 14:43:42.971338 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14330b10-0b24-42a5-a682-cbc7cdb4a546-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "14330b10-0b24-42a5-a682-cbc7cdb4a546" (UID: "14330b10-0b24-42a5-a682-cbc7cdb4a546"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.000553 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14330b10-0b24-42a5-a682-cbc7cdb4a546-kube-api-access-q4b9r" (OuterVolumeSpecName: "kube-api-access-q4b9r") pod "14330b10-0b24-42a5-a682-cbc7cdb4a546" (UID: "14330b10-0b24-42a5-a682-cbc7cdb4a546"). InnerVolumeSpecName "kube-api-access-q4b9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.072166 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14330b10-0b24-42a5-a682-cbc7cdb4a546-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.072403 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4b9r\" (UniqueName: \"kubernetes.io/projected/14330b10-0b24-42a5-a682-cbc7cdb4a546-kube-api-access-q4b9r\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.470483 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-5bd8-account-create-m8848" Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.470470 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-5bd8-account-create-m8848" event={"ID":"14330b10-0b24-42a5-a682-cbc7cdb4a546","Type":"ContainerDied","Data":"a36ce94535a040f1d85685258e2d5be312295e4973e92cd4fa1ec93a951ae9fc"} Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.471873 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a36ce94535a040f1d85685258e2d5be312295e4973e92cd4fa1ec93a951ae9fc" Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.783682 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-4lg86" Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.886777 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jftkt-config-wndvl" Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.888556 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef276b07-ccd1-4f2d-ab5f-b7208745b3e8-operator-scripts\") pod \"ef276b07-ccd1-4f2d-ab5f-b7208745b3e8\" (UID: \"ef276b07-ccd1-4f2d-ab5f-b7208745b3e8\") " Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.888690 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwdjw\" (UniqueName: \"kubernetes.io/projected/ef276b07-ccd1-4f2d-ab5f-b7208745b3e8-kube-api-access-hwdjw\") pod \"ef276b07-ccd1-4f2d-ab5f-b7208745b3e8\" (UID: \"ef276b07-ccd1-4f2d-ab5f-b7208745b3e8\") " Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.889111 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef276b07-ccd1-4f2d-ab5f-b7208745b3e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ef276b07-ccd1-4f2d-ab5f-b7208745b3e8" (UID: "ef276b07-ccd1-4f2d-ab5f-b7208745b3e8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.894780 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef276b07-ccd1-4f2d-ab5f-b7208745b3e8-kube-api-access-hwdjw" (OuterVolumeSpecName: "kube-api-access-hwdjw") pod "ef276b07-ccd1-4f2d-ab5f-b7208745b3e8" (UID: "ef276b07-ccd1-4f2d-ab5f-b7208745b3e8"). InnerVolumeSpecName "kube-api-access-hwdjw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.990159 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5ad5f0df-6e46-400e-8e99-435d224e0b02-var-run-ovn\") pod \"5ad5f0df-6e46-400e-8e99-435d224e0b02\" (UID: \"5ad5f0df-6e46-400e-8e99-435d224e0b02\") " Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.990225 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad5f0df-6e46-400e-8e99-435d224e0b02-additional-scripts\") pod \"5ad5f0df-6e46-400e-8e99-435d224e0b02\" (UID: \"5ad5f0df-6e46-400e-8e99-435d224e0b02\") " Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.990259 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ad5f0df-6e46-400e-8e99-435d224e0b02-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "5ad5f0df-6e46-400e-8e99-435d224e0b02" (UID: "5ad5f0df-6e46-400e-8e99-435d224e0b02"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.990353 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjzxf\" (UniqueName: \"kubernetes.io/projected/5ad5f0df-6e46-400e-8e99-435d224e0b02-kube-api-access-cjzxf\") pod \"5ad5f0df-6e46-400e-8e99-435d224e0b02\" (UID: \"5ad5f0df-6e46-400e-8e99-435d224e0b02\") " Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.990414 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5ad5f0df-6e46-400e-8e99-435d224e0b02-var-log-ovn\") pod \"5ad5f0df-6e46-400e-8e99-435d224e0b02\" (UID: \"5ad5f0df-6e46-400e-8e99-435d224e0b02\") " Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.990452 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ad5f0df-6e46-400e-8e99-435d224e0b02-scripts\") pod \"5ad5f0df-6e46-400e-8e99-435d224e0b02\" (UID: \"5ad5f0df-6e46-400e-8e99-435d224e0b02\") " Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.990522 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5ad5f0df-6e46-400e-8e99-435d224e0b02-var-run\") pod \"5ad5f0df-6e46-400e-8e99-435d224e0b02\" (UID: \"5ad5f0df-6e46-400e-8e99-435d224e0b02\") " Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.990520 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ad5f0df-6e46-400e-8e99-435d224e0b02-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "5ad5f0df-6e46-400e-8e99-435d224e0b02" (UID: "5ad5f0df-6e46-400e-8e99-435d224e0b02"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.990693 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ad5f0df-6e46-400e-8e99-435d224e0b02-var-run" (OuterVolumeSpecName: "var-run") pod "5ad5f0df-6e46-400e-8e99-435d224e0b02" (UID: "5ad5f0df-6e46-400e-8e99-435d224e0b02"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.991042 4796 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5ad5f0df-6e46-400e-8e99-435d224e0b02-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.991084 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwdjw\" (UniqueName: \"kubernetes.io/projected/ef276b07-ccd1-4f2d-ab5f-b7208745b3e8-kube-api-access-hwdjw\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.991105 4796 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5ad5f0df-6e46-400e-8e99-435d224e0b02-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.991123 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef276b07-ccd1-4f2d-ab5f-b7208745b3e8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.991140 4796 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5ad5f0df-6e46-400e-8e99-435d224e0b02-var-run\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.991044 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/5ad5f0df-6e46-400e-8e99-435d224e0b02-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "5ad5f0df-6e46-400e-8e99-435d224e0b02" (UID: "5ad5f0df-6e46-400e-8e99-435d224e0b02"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.991329 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ad5f0df-6e46-400e-8e99-435d224e0b02-scripts" (OuterVolumeSpecName: "scripts") pod "5ad5f0df-6e46-400e-8e99-435d224e0b02" (UID: "5ad5f0df-6e46-400e-8e99-435d224e0b02"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:43:43 crc kubenswrapper[4796]: I1125 14:43:43.993909 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ad5f0df-6e46-400e-8e99-435d224e0b02-kube-api-access-cjzxf" (OuterVolumeSpecName: "kube-api-access-cjzxf") pod "5ad5f0df-6e46-400e-8e99-435d224e0b02" (UID: "5ad5f0df-6e46-400e-8e99-435d224e0b02"). InnerVolumeSpecName "kube-api-access-cjzxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:43:44 crc kubenswrapper[4796]: I1125 14:43:44.092595 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjzxf\" (UniqueName: \"kubernetes.io/projected/5ad5f0df-6e46-400e-8e99-435d224e0b02-kube-api-access-cjzxf\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:44 crc kubenswrapper[4796]: I1125 14:43:44.092652 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ad5f0df-6e46-400e-8e99-435d224e0b02-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:44 crc kubenswrapper[4796]: I1125 14:43:44.092670 4796 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad5f0df-6e46-400e-8e99-435d224e0b02-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:43:44 crc kubenswrapper[4796]: I1125 14:43:44.479505 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jftkt-config-wndvl" event={"ID":"5ad5f0df-6e46-400e-8e99-435d224e0b02","Type":"ContainerDied","Data":"4bbbe9181a3d1b0412690bfbba6a14a76812d1f3cc9041ccf1115ae0bbfa8665"} Nov 25 14:43:44 crc kubenswrapper[4796]: I1125 14:43:44.479560 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bbbe9181a3d1b0412690bfbba6a14a76812d1f3cc9041ccf1115ae0bbfa8665" Nov 25 14:43:44 crc kubenswrapper[4796]: I1125 14:43:44.479659 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-jftkt-config-wndvl" Nov 25 14:43:44 crc kubenswrapper[4796]: I1125 14:43:44.481813 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4lg86" event={"ID":"ef276b07-ccd1-4f2d-ab5f-b7208745b3e8","Type":"ContainerDied","Data":"663db62ffebe993cf5296fb94eebba366da63b9a1d8e5689f5a7df3efb886b9d"} Nov 25 14:43:44 crc kubenswrapper[4796]: I1125 14:43:44.481838 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="663db62ffebe993cf5296fb94eebba366da63b9a1d8e5689f5a7df3efb886b9d" Nov 25 14:43:44 crc kubenswrapper[4796]: I1125 14:43:44.481878 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4lg86" Nov 25 14:43:44 crc kubenswrapper[4796]: I1125 14:43:44.979990 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-jftkt-config-wndvl"] Nov 25 14:43:44 crc kubenswrapper[4796]: I1125 14:43:44.991427 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-jftkt-config-wndvl"] Nov 25 14:43:45 crc kubenswrapper[4796]: I1125 14:43:45.373432 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-kddwd"] Nov 25 14:43:45 crc kubenswrapper[4796]: E1125 14:43:45.373841 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad5f0df-6e46-400e-8e99-435d224e0b02" containerName="ovn-config" Nov 25 14:43:45 crc kubenswrapper[4796]: I1125 14:43:45.373869 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad5f0df-6e46-400e-8e99-435d224e0b02" containerName="ovn-config" Nov 25 14:43:45 crc kubenswrapper[4796]: E1125 14:43:45.373889 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef276b07-ccd1-4f2d-ab5f-b7208745b3e8" containerName="mariadb-database-create" Nov 25 14:43:45 crc kubenswrapper[4796]: I1125 14:43:45.373898 4796 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ef276b07-ccd1-4f2d-ab5f-b7208745b3e8" containerName="mariadb-database-create" Nov 25 14:43:45 crc kubenswrapper[4796]: E1125 14:43:45.373918 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14330b10-0b24-42a5-a682-cbc7cdb4a546" containerName="mariadb-account-create" Nov 25 14:43:45 crc kubenswrapper[4796]: I1125 14:43:45.373927 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="14330b10-0b24-42a5-a682-cbc7cdb4a546" containerName="mariadb-account-create" Nov 25 14:43:45 crc kubenswrapper[4796]: I1125 14:43:45.374086 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef276b07-ccd1-4f2d-ab5f-b7208745b3e8" containerName="mariadb-database-create" Nov 25 14:43:45 crc kubenswrapper[4796]: I1125 14:43:45.374104 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ad5f0df-6e46-400e-8e99-435d224e0b02" containerName="ovn-config" Nov 25 14:43:45 crc kubenswrapper[4796]: I1125 14:43:45.374122 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="14330b10-0b24-42a5-a682-cbc7cdb4a546" containerName="mariadb-account-create" Nov 25 14:43:45 crc kubenswrapper[4796]: I1125 14:43:45.374721 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kddwd" Nov 25 14:43:45 crc kubenswrapper[4796]: I1125 14:43:45.376737 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dxx7v" Nov 25 14:43:45 crc kubenswrapper[4796]: I1125 14:43:45.377710 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 25 14:43:45 crc kubenswrapper[4796]: I1125 14:43:45.387368 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kddwd"] Nov 25 14:43:45 crc kubenswrapper[4796]: I1125 14:43:45.518303 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d3947d76-dff0-44d7-9b86-d2a0406db500-db-sync-config-data\") pod \"glance-db-sync-kddwd\" (UID: \"d3947d76-dff0-44d7-9b86-d2a0406db500\") " pod="openstack/glance-db-sync-kddwd" Nov 25 14:43:45 crc kubenswrapper[4796]: I1125 14:43:45.518832 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3947d76-dff0-44d7-9b86-d2a0406db500-config-data\") pod \"glance-db-sync-kddwd\" (UID: \"d3947d76-dff0-44d7-9b86-d2a0406db500\") " pod="openstack/glance-db-sync-kddwd" Nov 25 14:43:45 crc kubenswrapper[4796]: I1125 14:43:45.518860 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3947d76-dff0-44d7-9b86-d2a0406db500-combined-ca-bundle\") pod \"glance-db-sync-kddwd\" (UID: \"d3947d76-dff0-44d7-9b86-d2a0406db500\") " pod="openstack/glance-db-sync-kddwd" Nov 25 14:43:45 crc kubenswrapper[4796]: I1125 14:43:45.518883 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvfvq\" (UniqueName: 
\"kubernetes.io/projected/d3947d76-dff0-44d7-9b86-d2a0406db500-kube-api-access-dvfvq\") pod \"glance-db-sync-kddwd\" (UID: \"d3947d76-dff0-44d7-9b86-d2a0406db500\") " pod="openstack/glance-db-sync-kddwd" Nov 25 14:43:45 crc kubenswrapper[4796]: I1125 14:43:45.533929 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-jftkt" Nov 25 14:43:45 crc kubenswrapper[4796]: I1125 14:43:45.620849 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3947d76-dff0-44d7-9b86-d2a0406db500-config-data\") pod \"glance-db-sync-kddwd\" (UID: \"d3947d76-dff0-44d7-9b86-d2a0406db500\") " pod="openstack/glance-db-sync-kddwd" Nov 25 14:43:45 crc kubenswrapper[4796]: I1125 14:43:45.620918 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3947d76-dff0-44d7-9b86-d2a0406db500-combined-ca-bundle\") pod \"glance-db-sync-kddwd\" (UID: \"d3947d76-dff0-44d7-9b86-d2a0406db500\") " pod="openstack/glance-db-sync-kddwd" Nov 25 14:43:45 crc kubenswrapper[4796]: I1125 14:43:45.620955 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvfvq\" (UniqueName: \"kubernetes.io/projected/d3947d76-dff0-44d7-9b86-d2a0406db500-kube-api-access-dvfvq\") pod \"glance-db-sync-kddwd\" (UID: \"d3947d76-dff0-44d7-9b86-d2a0406db500\") " pod="openstack/glance-db-sync-kddwd" Nov 25 14:43:45 crc kubenswrapper[4796]: I1125 14:43:45.621037 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d3947d76-dff0-44d7-9b86-d2a0406db500-db-sync-config-data\") pod \"glance-db-sync-kddwd\" (UID: \"d3947d76-dff0-44d7-9b86-d2a0406db500\") " pod="openstack/glance-db-sync-kddwd" Nov 25 14:43:45 crc kubenswrapper[4796]: I1125 14:43:45.627389 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3947d76-dff0-44d7-9b86-d2a0406db500-config-data\") pod \"glance-db-sync-kddwd\" (UID: \"d3947d76-dff0-44d7-9b86-d2a0406db500\") " pod="openstack/glance-db-sync-kddwd" Nov 25 14:43:45 crc kubenswrapper[4796]: I1125 14:43:45.627415 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3947d76-dff0-44d7-9b86-d2a0406db500-combined-ca-bundle\") pod \"glance-db-sync-kddwd\" (UID: \"d3947d76-dff0-44d7-9b86-d2a0406db500\") " pod="openstack/glance-db-sync-kddwd" Nov 25 14:43:45 crc kubenswrapper[4796]: I1125 14:43:45.627390 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d3947d76-dff0-44d7-9b86-d2a0406db500-db-sync-config-data\") pod \"glance-db-sync-kddwd\" (UID: \"d3947d76-dff0-44d7-9b86-d2a0406db500\") " pod="openstack/glance-db-sync-kddwd" Nov 25 14:43:45 crc kubenswrapper[4796]: I1125 14:43:45.639158 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvfvq\" (UniqueName: \"kubernetes.io/projected/d3947d76-dff0-44d7-9b86-d2a0406db500-kube-api-access-dvfvq\") pod \"glance-db-sync-kddwd\" (UID: \"d3947d76-dff0-44d7-9b86-d2a0406db500\") " pod="openstack/glance-db-sync-kddwd" Nov 25 14:43:45 crc kubenswrapper[4796]: I1125 14:43:45.691230 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kddwd" Nov 25 14:43:46 crc kubenswrapper[4796]: I1125 14:43:46.420146 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ad5f0df-6e46-400e-8e99-435d224e0b02" path="/var/lib/kubelet/pods/5ad5f0df-6e46-400e-8e99-435d224e0b02/volumes" Nov 25 14:43:46 crc kubenswrapper[4796]: I1125 14:43:46.538616 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kddwd"] Nov 25 14:43:46 crc kubenswrapper[4796]: W1125 14:43:46.552726 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3947d76_dff0_44d7_9b86_d2a0406db500.slice/crio-f50ee0fbaba9e9b0de548d9dce9f34b0bbe1af7e0b066be0cd81d2be23357b85 WatchSource:0}: Error finding container f50ee0fbaba9e9b0de548d9dce9f34b0bbe1af7e0b066be0cd81d2be23357b85: Status 404 returned error can't find the container with id f50ee0fbaba9e9b0de548d9dce9f34b0bbe1af7e0b066be0cd81d2be23357b85 Nov 25 14:43:47 crc kubenswrapper[4796]: I1125 14:43:47.509717 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kddwd" event={"ID":"d3947d76-dff0-44d7-9b86-d2a0406db500","Type":"ContainerStarted","Data":"f50ee0fbaba9e9b0de548d9dce9f34b0bbe1af7e0b066be0cd81d2be23357b85"} Nov 25 14:43:49 crc kubenswrapper[4796]: I1125 14:43:49.514477 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 14:43:49 crc kubenswrapper[4796]: I1125 14:43:49.514884 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 14:43:49 crc kubenswrapper[4796]: I1125 14:43:49.994804 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/49501e2a-5ad0-4de7-9b98-510c0c55863f-etc-swift\") pod \"swift-storage-0\" (UID: \"49501e2a-5ad0-4de7-9b98-510c0c55863f\") " pod="openstack/swift-storage-0" Nov 25 14:43:50 crc kubenswrapper[4796]: I1125 14:43:50.001885 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/49501e2a-5ad0-4de7-9b98-510c0c55863f-etc-swift\") pod \"swift-storage-0\" (UID: \"49501e2a-5ad0-4de7-9b98-510c0c55863f\") " pod="openstack/swift-storage-0" Nov 25 14:43:50 crc kubenswrapper[4796]: I1125 14:43:50.264730 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 25 14:43:50 crc kubenswrapper[4796]: I1125 14:43:50.754417 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 25 14:43:51 crc kubenswrapper[4796]: I1125 14:43:51.384786 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 25 14:43:51 crc kubenswrapper[4796]: I1125 14:43:51.687996 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:43:51 crc kubenswrapper[4796]: I1125 14:43:51.844342 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-g8hc4"] Nov 25 14:43:51 crc kubenswrapper[4796]: I1125 14:43:51.845608 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-g8hc4" Nov 25 14:43:51 crc kubenswrapper[4796]: I1125 14:43:51.865350 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-g8hc4"] Nov 25 14:43:51 crc kubenswrapper[4796]: I1125 14:43:51.924989 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67e2365b-5b83-408e-8b40-59c35b6fcd90-operator-scripts\") pod \"barbican-db-create-g8hc4\" (UID: \"67e2365b-5b83-408e-8b40-59c35b6fcd90\") " pod="openstack/barbican-db-create-g8hc4" Nov 25 14:43:51 crc kubenswrapper[4796]: I1125 14:43:51.925050 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhbcc\" (UniqueName: \"kubernetes.io/projected/67e2365b-5b83-408e-8b40-59c35b6fcd90-kube-api-access-nhbcc\") pod \"barbican-db-create-g8hc4\" (UID: \"67e2365b-5b83-408e-8b40-59c35b6fcd90\") " pod="openstack/barbican-db-create-g8hc4" Nov 25 14:43:51 crc kubenswrapper[4796]: I1125 14:43:51.991710 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-3c99-account-create-vmvq6"] Nov 25 14:43:51 crc kubenswrapper[4796]: I1125 14:43:51.992941 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3c99-account-create-vmvq6" Nov 25 14:43:51 crc kubenswrapper[4796]: I1125 14:43:51.997877 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.010253 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3c99-account-create-vmvq6"] Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.026324 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67e2365b-5b83-408e-8b40-59c35b6fcd90-operator-scripts\") pod \"barbican-db-create-g8hc4\" (UID: \"67e2365b-5b83-408e-8b40-59c35b6fcd90\") " pod="openstack/barbican-db-create-g8hc4" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.026379 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhbcc\" (UniqueName: \"kubernetes.io/projected/67e2365b-5b83-408e-8b40-59c35b6fcd90-kube-api-access-nhbcc\") pod \"barbican-db-create-g8hc4\" (UID: \"67e2365b-5b83-408e-8b40-59c35b6fcd90\") " pod="openstack/barbican-db-create-g8hc4" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.026407 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcz6k\" (UniqueName: \"kubernetes.io/projected/6107e4d3-3da4-4db6-9ec5-501f1b44c37c-kube-api-access-rcz6k\") pod \"barbican-3c99-account-create-vmvq6\" (UID: \"6107e4d3-3da4-4db6-9ec5-501f1b44c37c\") " pod="openstack/barbican-3c99-account-create-vmvq6" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.026428 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6107e4d3-3da4-4db6-9ec5-501f1b44c37c-operator-scripts\") pod \"barbican-3c99-account-create-vmvq6\" (UID: \"6107e4d3-3da4-4db6-9ec5-501f1b44c37c\") " 
pod="openstack/barbican-3c99-account-create-vmvq6" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.027149 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67e2365b-5b83-408e-8b40-59c35b6fcd90-operator-scripts\") pod \"barbican-db-create-g8hc4\" (UID: \"67e2365b-5b83-408e-8b40-59c35b6fcd90\") " pod="openstack/barbican-db-create-g8hc4" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.075795 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhbcc\" (UniqueName: \"kubernetes.io/projected/67e2365b-5b83-408e-8b40-59c35b6fcd90-kube-api-access-nhbcc\") pod \"barbican-db-create-g8hc4\" (UID: \"67e2365b-5b83-408e-8b40-59c35b6fcd90\") " pod="openstack/barbican-db-create-g8hc4" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.137434 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcz6k\" (UniqueName: \"kubernetes.io/projected/6107e4d3-3da4-4db6-9ec5-501f1b44c37c-kube-api-access-rcz6k\") pod \"barbican-3c99-account-create-vmvq6\" (UID: \"6107e4d3-3da4-4db6-9ec5-501f1b44c37c\") " pod="openstack/barbican-3c99-account-create-vmvq6" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.137515 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6107e4d3-3da4-4db6-9ec5-501f1b44c37c-operator-scripts\") pod \"barbican-3c99-account-create-vmvq6\" (UID: \"6107e4d3-3da4-4db6-9ec5-501f1b44c37c\") " pod="openstack/barbican-3c99-account-create-vmvq6" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.138519 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6107e4d3-3da4-4db6-9ec5-501f1b44c37c-operator-scripts\") pod \"barbican-3c99-account-create-vmvq6\" (UID: \"6107e4d3-3da4-4db6-9ec5-501f1b44c37c\") " 
pod="openstack/barbican-3c99-account-create-vmvq6" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.140075 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-95mnt"] Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.141830 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-95mnt" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.150114 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-cfce-account-create-ss8dd"] Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.169446 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cfce-account-create-ss8dd" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.179002 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.193319 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcz6k\" (UniqueName: \"kubernetes.io/projected/6107e4d3-3da4-4db6-9ec5-501f1b44c37c-kube-api-access-rcz6k\") pod \"barbican-3c99-account-create-vmvq6\" (UID: \"6107e4d3-3da4-4db6-9ec5-501f1b44c37c\") " pod="openstack/barbican-3c99-account-create-vmvq6" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.214685 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-g8hc4" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.217523 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-95mnt"] Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.239657 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be9578a6-b0e4-4efb-ae4b-86cd92008d5e-operator-scripts\") pod \"cinder-cfce-account-create-ss8dd\" (UID: \"be9578a6-b0e4-4efb-ae4b-86cd92008d5e\") " pod="openstack/cinder-cfce-account-create-ss8dd" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.239749 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm29t\" (UniqueName: \"kubernetes.io/projected/51d3d4ee-22cb-4ec4-ad98-acfdf570ba21-kube-api-access-xm29t\") pod \"cinder-db-create-95mnt\" (UID: \"51d3d4ee-22cb-4ec4-ad98-acfdf570ba21\") " pod="openstack/cinder-db-create-95mnt" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.239806 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51d3d4ee-22cb-4ec4-ad98-acfdf570ba21-operator-scripts\") pod \"cinder-db-create-95mnt\" (UID: \"51d3d4ee-22cb-4ec4-ad98-acfdf570ba21\") " pod="openstack/cinder-db-create-95mnt" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.239865 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n52jr\" (UniqueName: \"kubernetes.io/projected/be9578a6-b0e4-4efb-ae4b-86cd92008d5e-kube-api-access-n52jr\") pod \"cinder-cfce-account-create-ss8dd\" (UID: \"be9578a6-b0e4-4efb-ae4b-86cd92008d5e\") " pod="openstack/cinder-cfce-account-create-ss8dd" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.288058 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-cfce-account-create-ss8dd"] Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.318200 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-stpn7"] Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.319833 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-stpn7" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.325201 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.326052 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-8z6zn" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.326276 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.326420 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.327087 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3c99-account-create-vmvq6" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.332330 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-stpn7"] Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.341773 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b64285-eaa9-4677-aa4c-a16f0cffc2f8-combined-ca-bundle\") pod \"keystone-db-sync-stpn7\" (UID: \"02b64285-eaa9-4677-aa4c-a16f0cffc2f8\") " pod="openstack/keystone-db-sync-stpn7" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.341852 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm29t\" (UniqueName: \"kubernetes.io/projected/51d3d4ee-22cb-4ec4-ad98-acfdf570ba21-kube-api-access-xm29t\") pod \"cinder-db-create-95mnt\" (UID: \"51d3d4ee-22cb-4ec4-ad98-acfdf570ba21\") " pod="openstack/cinder-db-create-95mnt" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.341892 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxlwh\" (UniqueName: \"kubernetes.io/projected/02b64285-eaa9-4677-aa4c-a16f0cffc2f8-kube-api-access-sxlwh\") pod \"keystone-db-sync-stpn7\" (UID: \"02b64285-eaa9-4677-aa4c-a16f0cffc2f8\") " pod="openstack/keystone-db-sync-stpn7" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.341952 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51d3d4ee-22cb-4ec4-ad98-acfdf570ba21-operator-scripts\") pod \"cinder-db-create-95mnt\" (UID: \"51d3d4ee-22cb-4ec4-ad98-acfdf570ba21\") " pod="openstack/cinder-db-create-95mnt" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.341999 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/02b64285-eaa9-4677-aa4c-a16f0cffc2f8-config-data\") pod \"keystone-db-sync-stpn7\" (UID: \"02b64285-eaa9-4677-aa4c-a16f0cffc2f8\") " pod="openstack/keystone-db-sync-stpn7" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.342023 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n52jr\" (UniqueName: \"kubernetes.io/projected/be9578a6-b0e4-4efb-ae4b-86cd92008d5e-kube-api-access-n52jr\") pod \"cinder-cfce-account-create-ss8dd\" (UID: \"be9578a6-b0e4-4efb-ae4b-86cd92008d5e\") " pod="openstack/cinder-cfce-account-create-ss8dd" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.342072 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be9578a6-b0e4-4efb-ae4b-86cd92008d5e-operator-scripts\") pod \"cinder-cfce-account-create-ss8dd\" (UID: \"be9578a6-b0e4-4efb-ae4b-86cd92008d5e\") " pod="openstack/cinder-cfce-account-create-ss8dd" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.342967 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be9578a6-b0e4-4efb-ae4b-86cd92008d5e-operator-scripts\") pod \"cinder-cfce-account-create-ss8dd\" (UID: \"be9578a6-b0e4-4efb-ae4b-86cd92008d5e\") " pod="openstack/cinder-cfce-account-create-ss8dd" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.342988 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51d3d4ee-22cb-4ec4-ad98-acfdf570ba21-operator-scripts\") pod \"cinder-db-create-95mnt\" (UID: \"51d3d4ee-22cb-4ec4-ad98-acfdf570ba21\") " pod="openstack/cinder-db-create-95mnt" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.366668 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n52jr\" (UniqueName: 
\"kubernetes.io/projected/be9578a6-b0e4-4efb-ae4b-86cd92008d5e-kube-api-access-n52jr\") pod \"cinder-cfce-account-create-ss8dd\" (UID: \"be9578a6-b0e4-4efb-ae4b-86cd92008d5e\") " pod="openstack/cinder-cfce-account-create-ss8dd" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.387539 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm29t\" (UniqueName: \"kubernetes.io/projected/51d3d4ee-22cb-4ec4-ad98-acfdf570ba21-kube-api-access-xm29t\") pod \"cinder-db-create-95mnt\" (UID: \"51d3d4ee-22cb-4ec4-ad98-acfdf570ba21\") " pod="openstack/cinder-db-create-95mnt" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.391189 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-p2wm7"] Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.392891 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-p2wm7" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.406669 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-p2wm7"] Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.443746 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd9w4\" (UniqueName: \"kubernetes.io/projected/33029dfd-906d-425d-8266-d87ea1af419b-kube-api-access-jd9w4\") pod \"neutron-db-create-p2wm7\" (UID: \"33029dfd-906d-425d-8266-d87ea1af419b\") " pod="openstack/neutron-db-create-p2wm7" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.443837 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b64285-eaa9-4677-aa4c-a16f0cffc2f8-config-data\") pod \"keystone-db-sync-stpn7\" (UID: \"02b64285-eaa9-4677-aa4c-a16f0cffc2f8\") " pod="openstack/keystone-db-sync-stpn7" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.443975 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b64285-eaa9-4677-aa4c-a16f0cffc2f8-combined-ca-bundle\") pod \"keystone-db-sync-stpn7\" (UID: \"02b64285-eaa9-4677-aa4c-a16f0cffc2f8\") " pod="openstack/keystone-db-sync-stpn7" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.444000 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33029dfd-906d-425d-8266-d87ea1af419b-operator-scripts\") pod \"neutron-db-create-p2wm7\" (UID: \"33029dfd-906d-425d-8266-d87ea1af419b\") " pod="openstack/neutron-db-create-p2wm7" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.444052 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxlwh\" (UniqueName: \"kubernetes.io/projected/02b64285-eaa9-4677-aa4c-a16f0cffc2f8-kube-api-access-sxlwh\") pod \"keystone-db-sync-stpn7\" (UID: \"02b64285-eaa9-4677-aa4c-a16f0cffc2f8\") " pod="openstack/keystone-db-sync-stpn7" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.449088 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b64285-eaa9-4677-aa4c-a16f0cffc2f8-config-data\") pod \"keystone-db-sync-stpn7\" (UID: \"02b64285-eaa9-4677-aa4c-a16f0cffc2f8\") " pod="openstack/keystone-db-sync-stpn7" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.454139 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b64285-eaa9-4677-aa4c-a16f0cffc2f8-combined-ca-bundle\") pod \"keystone-db-sync-stpn7\" (UID: \"02b64285-eaa9-4677-aa4c-a16f0cffc2f8\") " pod="openstack/keystone-db-sync-stpn7" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.466423 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxlwh\" (UniqueName: 
\"kubernetes.io/projected/02b64285-eaa9-4677-aa4c-a16f0cffc2f8-kube-api-access-sxlwh\") pod \"keystone-db-sync-stpn7\" (UID: \"02b64285-eaa9-4677-aa4c-a16f0cffc2f8\") " pod="openstack/keystone-db-sync-stpn7" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.542007 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-95mnt" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.545512 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33029dfd-906d-425d-8266-d87ea1af419b-operator-scripts\") pod \"neutron-db-create-p2wm7\" (UID: \"33029dfd-906d-425d-8266-d87ea1af419b\") " pod="openstack/neutron-db-create-p2wm7" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.545667 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd9w4\" (UniqueName: \"kubernetes.io/projected/33029dfd-906d-425d-8266-d87ea1af419b-kube-api-access-jd9w4\") pod \"neutron-db-create-p2wm7\" (UID: \"33029dfd-906d-425d-8266-d87ea1af419b\") " pod="openstack/neutron-db-create-p2wm7" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.548994 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33029dfd-906d-425d-8266-d87ea1af419b-operator-scripts\") pod \"neutron-db-create-p2wm7\" (UID: \"33029dfd-906d-425d-8266-d87ea1af419b\") " pod="openstack/neutron-db-create-p2wm7" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.562268 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-cfce-account-create-ss8dd" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.566190 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd9w4\" (UniqueName: \"kubernetes.io/projected/33029dfd-906d-425d-8266-d87ea1af419b-kube-api-access-jd9w4\") pod \"neutron-db-create-p2wm7\" (UID: \"33029dfd-906d-425d-8266-d87ea1af419b\") " pod="openstack/neutron-db-create-p2wm7" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.648155 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-stpn7" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.679537 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-a7c2-account-create-w6tcb"] Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.681939 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a7c2-account-create-w6tcb" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.690013 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a7c2-account-create-w6tcb"] Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.801483 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-p2wm7" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.809838 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.905435 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhzp7\" (UniqueName: \"kubernetes.io/projected/2739db56-54ae-4a4d-8941-5d27d9fbbd85-kube-api-access-fhzp7\") pod \"neutron-a7c2-account-create-w6tcb\" (UID: \"2739db56-54ae-4a4d-8941-5d27d9fbbd85\") " pod="openstack/neutron-a7c2-account-create-w6tcb" Nov 25 14:43:52 crc kubenswrapper[4796]: I1125 14:43:52.905492 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2739db56-54ae-4a4d-8941-5d27d9fbbd85-operator-scripts\") pod \"neutron-a7c2-account-create-w6tcb\" (UID: \"2739db56-54ae-4a4d-8941-5d27d9fbbd85\") " pod="openstack/neutron-a7c2-account-create-w6tcb" Nov 25 14:43:53 crc kubenswrapper[4796]: I1125 14:43:53.007559 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhzp7\" (UniqueName: \"kubernetes.io/projected/2739db56-54ae-4a4d-8941-5d27d9fbbd85-kube-api-access-fhzp7\") pod \"neutron-a7c2-account-create-w6tcb\" (UID: \"2739db56-54ae-4a4d-8941-5d27d9fbbd85\") " pod="openstack/neutron-a7c2-account-create-w6tcb" Nov 25 14:43:53 crc kubenswrapper[4796]: I1125 14:43:53.007940 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2739db56-54ae-4a4d-8941-5d27d9fbbd85-operator-scripts\") pod \"neutron-a7c2-account-create-w6tcb\" (UID: \"2739db56-54ae-4a4d-8941-5d27d9fbbd85\") " pod="openstack/neutron-a7c2-account-create-w6tcb" Nov 25 14:43:53 crc kubenswrapper[4796]: I1125 14:43:53.010335 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2739db56-54ae-4a4d-8941-5d27d9fbbd85-operator-scripts\") pod \"neutron-a7c2-account-create-w6tcb\" (UID: \"2739db56-54ae-4a4d-8941-5d27d9fbbd85\") " pod="openstack/neutron-a7c2-account-create-w6tcb" Nov 25 14:43:53 crc kubenswrapper[4796]: I1125 14:43:53.030182 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhzp7\" (UniqueName: \"kubernetes.io/projected/2739db56-54ae-4a4d-8941-5d27d9fbbd85-kube-api-access-fhzp7\") pod \"neutron-a7c2-account-create-w6tcb\" (UID: \"2739db56-54ae-4a4d-8941-5d27d9fbbd85\") " pod="openstack/neutron-a7c2-account-create-w6tcb" Nov 25 14:43:53 crc kubenswrapper[4796]: I1125 14:43:53.126115 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a7c2-account-create-w6tcb" Nov 25 14:44:00 crc kubenswrapper[4796]: W1125 14:44:00.091483 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49501e2a_5ad0_4de7_9b98_510c0c55863f.slice/crio-aff1bac54790cd40d88ceea2cdc991dfbc3682c3304efccaff0ce544af07b333 WatchSource:0}: Error finding container aff1bac54790cd40d88ceea2cdc991dfbc3682c3304efccaff0ce544af07b333: Status 404 returned error can't find the container with id aff1bac54790cd40d88ceea2cdc991dfbc3682c3304efccaff0ce544af07b333 Nov 25 14:44:00 crc kubenswrapper[4796]: E1125 14:44:00.318249 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Nov 25 14:44:00 crc kubenswrapper[4796]: E1125 14:44:00.318880 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dvfvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-kddwd_openstack(d3947d76-dff0-44d7-9b86-d2a0406db500): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Nov 25 14:44:00 crc kubenswrapper[4796]: E1125 14:44:00.320462 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-kddwd" podUID="d3947d76-dff0-44d7-9b86-d2a0406db500" Nov 25 14:44:00 crc kubenswrapper[4796]: I1125 14:44:00.633926 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49501e2a-5ad0-4de7-9b98-510c0c55863f","Type":"ContainerStarted","Data":"aff1bac54790cd40d88ceea2cdc991dfbc3682c3304efccaff0ce544af07b333"} Nov 25 14:44:00 crc kubenswrapper[4796]: E1125 14:44:00.636373 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-kddwd" podUID="d3947d76-dff0-44d7-9b86-d2a0406db500" Nov 25 14:44:00 crc kubenswrapper[4796]: I1125 14:44:00.696189 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-g8hc4"] Nov 25 14:44:00 crc kubenswrapper[4796]: W1125 14:44:00.699503 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67e2365b_5b83_408e_8b40_59c35b6fcd90.slice/crio-1a1633d6c8b35e45cc54edbfc7bbff893914c72d80df9d20f4878ea576e70164 WatchSource:0}: Error finding container 1a1633d6c8b35e45cc54edbfc7bbff893914c72d80df9d20f4878ea576e70164: Status 404 returned error can't find the container with id 1a1633d6c8b35e45cc54edbfc7bbff893914c72d80df9d20f4878ea576e70164 Nov 25 14:44:00 crc kubenswrapper[4796]: W1125 14:44:00.700218 4796 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33029dfd_906d_425d_8266_d87ea1af419b.slice/crio-cb345b2c85cb94b62fcebd89a6b5a175e4170770aef0aefa081dc1ce8a0d6735 WatchSource:0}: Error finding container cb345b2c85cb94b62fcebd89a6b5a175e4170770aef0aefa081dc1ce8a0d6735: Status 404 returned error can't find the container with id cb345b2c85cb94b62fcebd89a6b5a175e4170770aef0aefa081dc1ce8a0d6735 Nov 25 14:44:00 crc kubenswrapper[4796]: I1125 14:44:00.701715 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-p2wm7"] Nov 25 14:44:00 crc kubenswrapper[4796]: W1125 14:44:00.707723 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6107e4d3_3da4_4db6_9ec5_501f1b44c37c.slice/crio-83040c27a3bd6f7c2dcb3cd32ada0d7875436264760b0c6b50d28bf0964c414c WatchSource:0}: Error finding container 83040c27a3bd6f7c2dcb3cd32ada0d7875436264760b0c6b50d28bf0964c414c: Status 404 returned error can't find the container with id 83040c27a3bd6f7c2dcb3cd32ada0d7875436264760b0c6b50d28bf0964c414c Nov 25 14:44:00 crc kubenswrapper[4796]: I1125 14:44:00.707955 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3c99-account-create-vmvq6"] Nov 25 14:44:00 crc kubenswrapper[4796]: I1125 14:44:00.786897 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-95mnt"] Nov 25 14:44:00 crc kubenswrapper[4796]: W1125 14:44:00.797479 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51d3d4ee_22cb_4ec4_ad98_acfdf570ba21.slice/crio-2975b29d7d24bef2d0cef6606e3dc99d5e2f9ff878204df817b3e4dc25062302 WatchSource:0}: Error finding container 2975b29d7d24bef2d0cef6606e3dc99d5e2f9ff878204df817b3e4dc25062302: Status 404 returned error can't find the container with id 2975b29d7d24bef2d0cef6606e3dc99d5e2f9ff878204df817b3e4dc25062302 Nov 25 14:44:00 
crc kubenswrapper[4796]: I1125 14:44:00.803038 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cfce-account-create-ss8dd"] Nov 25 14:44:00 crc kubenswrapper[4796]: W1125 14:44:00.811647 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe9578a6_b0e4_4efb_ae4b_86cd92008d5e.slice/crio-59d717000e619a73749ad21625696494438c56f8aa426c1d1c4cf3d945951da0 WatchSource:0}: Error finding container 59d717000e619a73749ad21625696494438c56f8aa426c1d1c4cf3d945951da0: Status 404 returned error can't find the container with id 59d717000e619a73749ad21625696494438c56f8aa426c1d1c4cf3d945951da0 Nov 25 14:44:00 crc kubenswrapper[4796]: I1125 14:44:00.813065 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a7c2-account-create-w6tcb"] Nov 25 14:44:00 crc kubenswrapper[4796]: I1125 14:44:00.965864 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-stpn7"] Nov 25 14:44:00 crc kubenswrapper[4796]: W1125 14:44:00.971076 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02b64285_eaa9_4677_aa4c_a16f0cffc2f8.slice/crio-b15790d647a4e92915b1ac8afdb7f8c5b1f9087897e899a11afb90952aa7d99f WatchSource:0}: Error finding container b15790d647a4e92915b1ac8afdb7f8c5b1f9087897e899a11afb90952aa7d99f: Status 404 returned error can't find the container with id b15790d647a4e92915b1ac8afdb7f8c5b1f9087897e899a11afb90952aa7d99f Nov 25 14:44:01 crc kubenswrapper[4796]: I1125 14:44:01.645647 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-p2wm7" event={"ID":"33029dfd-906d-425d-8266-d87ea1af419b","Type":"ContainerStarted","Data":"1c27b68358aec3bb1d3a2ef1ffd4739013093372451fcc976cab987e7ebbc1a8"} Nov 25 14:44:01 crc kubenswrapper[4796]: I1125 14:44:01.645888 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-create-p2wm7" event={"ID":"33029dfd-906d-425d-8266-d87ea1af419b","Type":"ContainerStarted","Data":"cb345b2c85cb94b62fcebd89a6b5a175e4170770aef0aefa081dc1ce8a0d6735"} Nov 25 14:44:01 crc kubenswrapper[4796]: I1125 14:44:01.649676 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-g8hc4" event={"ID":"67e2365b-5b83-408e-8b40-59c35b6fcd90","Type":"ContainerStarted","Data":"1e69a3294841638470b56e6ad1ccfb5caf614f4c65c282b6364ac89e54027846"} Nov 25 14:44:01 crc kubenswrapper[4796]: I1125 14:44:01.649721 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-g8hc4" event={"ID":"67e2365b-5b83-408e-8b40-59c35b6fcd90","Type":"ContainerStarted","Data":"1a1633d6c8b35e45cc54edbfc7bbff893914c72d80df9d20f4878ea576e70164"} Nov 25 14:44:01 crc kubenswrapper[4796]: I1125 14:44:01.651186 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a7c2-account-create-w6tcb" event={"ID":"2739db56-54ae-4a4d-8941-5d27d9fbbd85","Type":"ContainerStarted","Data":"f8db123148bdac84a8e586915e7ceabce60f2251f24b77da9589e34572f96ebd"} Nov 25 14:44:01 crc kubenswrapper[4796]: I1125 14:44:01.651246 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a7c2-account-create-w6tcb" event={"ID":"2739db56-54ae-4a4d-8941-5d27d9fbbd85","Type":"ContainerStarted","Data":"8c137fb599853509c79300046c2b23e08b2a7006fbb8924356da693e854db990"} Nov 25 14:44:01 crc kubenswrapper[4796]: I1125 14:44:01.653815 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-95mnt" event={"ID":"51d3d4ee-22cb-4ec4-ad98-acfdf570ba21","Type":"ContainerStarted","Data":"224c0e5523c636191053b8b4316851dddedf503ddfc2a5c355675113acbb04d4"} Nov 25 14:44:01 crc kubenswrapper[4796]: I1125 14:44:01.653852 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-95mnt" 
event={"ID":"51d3d4ee-22cb-4ec4-ad98-acfdf570ba21","Type":"ContainerStarted","Data":"2975b29d7d24bef2d0cef6606e3dc99d5e2f9ff878204df817b3e4dc25062302"} Nov 25 14:44:01 crc kubenswrapper[4796]: I1125 14:44:01.655839 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cfce-account-create-ss8dd" event={"ID":"be9578a6-b0e4-4efb-ae4b-86cd92008d5e","Type":"ContainerStarted","Data":"b6051fdc8b9ed05f0002d452706c8aa881a72e0bbfadbb66a4254c29f3d43b99"} Nov 25 14:44:01 crc kubenswrapper[4796]: I1125 14:44:01.655896 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cfce-account-create-ss8dd" event={"ID":"be9578a6-b0e4-4efb-ae4b-86cd92008d5e","Type":"ContainerStarted","Data":"59d717000e619a73749ad21625696494438c56f8aa426c1d1c4cf3d945951da0"} Nov 25 14:44:01 crc kubenswrapper[4796]: I1125 14:44:01.658732 4796 generic.go:334] "Generic (PLEG): container finished" podID="6107e4d3-3da4-4db6-9ec5-501f1b44c37c" containerID="a4aaeb2e8eadd4938a7265faf28c9f569b1f96a0ed61e46d7834f8910c05b4f4" exitCode=0 Nov 25 14:44:01 crc kubenswrapper[4796]: I1125 14:44:01.658832 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3c99-account-create-vmvq6" event={"ID":"6107e4d3-3da4-4db6-9ec5-501f1b44c37c","Type":"ContainerDied","Data":"a4aaeb2e8eadd4938a7265faf28c9f569b1f96a0ed61e46d7834f8910c05b4f4"} Nov 25 14:44:01 crc kubenswrapper[4796]: I1125 14:44:01.658878 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3c99-account-create-vmvq6" event={"ID":"6107e4d3-3da4-4db6-9ec5-501f1b44c37c","Type":"ContainerStarted","Data":"83040c27a3bd6f7c2dcb3cd32ada0d7875436264760b0c6b50d28bf0964c414c"} Nov 25 14:44:01 crc kubenswrapper[4796]: I1125 14:44:01.660425 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-stpn7" event={"ID":"02b64285-eaa9-4677-aa4c-a16f0cffc2f8","Type":"ContainerStarted","Data":"b15790d647a4e92915b1ac8afdb7f8c5b1f9087897e899a11afb90952aa7d99f"} Nov 
25 14:44:01 crc kubenswrapper[4796]: I1125 14:44:01.668563 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-p2wm7" podStartSLOduration=9.668544477 podStartE2EDuration="9.668544477s" podCreationTimestamp="2025-11-25 14:43:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:44:01.663404986 +0000 UTC m=+1170.006514460" watchObservedRunningTime="2025-11-25 14:44:01.668544477 +0000 UTC m=+1170.011653911" Nov 25 14:44:01 crc kubenswrapper[4796]: I1125 14:44:01.700086 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-cfce-account-create-ss8dd" podStartSLOduration=9.70006661 podStartE2EDuration="9.70006661s" podCreationTimestamp="2025-11-25 14:43:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:44:01.694304117 +0000 UTC m=+1170.037413541" watchObservedRunningTime="2025-11-25 14:44:01.70006661 +0000 UTC m=+1170.043176034" Nov 25 14:44:01 crc kubenswrapper[4796]: I1125 14:44:01.719600 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-g8hc4" podStartSLOduration=10.71956699 podStartE2EDuration="10.71956699s" podCreationTimestamp="2025-11-25 14:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:44:01.70997557 +0000 UTC m=+1170.053085004" watchObservedRunningTime="2025-11-25 14:44:01.71956699 +0000 UTC m=+1170.062676414" Nov 25 14:44:01 crc kubenswrapper[4796]: I1125 14:44:01.734462 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-95mnt" podStartSLOduration=9.734443466 podStartE2EDuration="9.734443466s" podCreationTimestamp="2025-11-25 14:43:52 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:44:01.724458223 +0000 UTC m=+1170.067567667" watchObservedRunningTime="2025-11-25 14:44:01.734443466 +0000 UTC m=+1170.077552890" Nov 25 14:44:01 crc kubenswrapper[4796]: I1125 14:44:01.751510 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-a7c2-account-create-w6tcb" podStartSLOduration=9.751491985 podStartE2EDuration="9.751491985s" podCreationTimestamp="2025-11-25 14:43:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:44:01.75011562 +0000 UTC m=+1170.093225104" watchObservedRunningTime="2025-11-25 14:44:01.751491985 +0000 UTC m=+1170.094601409" Nov 25 14:44:02 crc kubenswrapper[4796]: I1125 14:44:02.670879 4796 generic.go:334] "Generic (PLEG): container finished" podID="33029dfd-906d-425d-8266-d87ea1af419b" containerID="1c27b68358aec3bb1d3a2ef1ffd4739013093372451fcc976cab987e7ebbc1a8" exitCode=0 Nov 25 14:44:02 crc kubenswrapper[4796]: I1125 14:44:02.671355 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-p2wm7" event={"ID":"33029dfd-906d-425d-8266-d87ea1af419b","Type":"ContainerDied","Data":"1c27b68358aec3bb1d3a2ef1ffd4739013093372451fcc976cab987e7ebbc1a8"} Nov 25 14:44:02 crc kubenswrapper[4796]: I1125 14:44:02.674240 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49501e2a-5ad0-4de7-9b98-510c0c55863f","Type":"ContainerStarted","Data":"142c73a950be0cd1af3804cc8f44033c7e6b950c77841a2004c6397afa088016"} Nov 25 14:44:02 crc kubenswrapper[4796]: I1125 14:44:02.678590 4796 generic.go:334] "Generic (PLEG): container finished" podID="2739db56-54ae-4a4d-8941-5d27d9fbbd85" containerID="f8db123148bdac84a8e586915e7ceabce60f2251f24b77da9589e34572f96ebd" exitCode=0 Nov 25 14:44:02 crc 
kubenswrapper[4796]: I1125 14:44:02.678682 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a7c2-account-create-w6tcb" event={"ID":"2739db56-54ae-4a4d-8941-5d27d9fbbd85","Type":"ContainerDied","Data":"f8db123148bdac84a8e586915e7ceabce60f2251f24b77da9589e34572f96ebd"} Nov 25 14:44:02 crc kubenswrapper[4796]: I1125 14:44:02.683077 4796 generic.go:334] "Generic (PLEG): container finished" podID="67e2365b-5b83-408e-8b40-59c35b6fcd90" containerID="1e69a3294841638470b56e6ad1ccfb5caf614f4c65c282b6364ac89e54027846" exitCode=0 Nov 25 14:44:02 crc kubenswrapper[4796]: I1125 14:44:02.683184 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-g8hc4" event={"ID":"67e2365b-5b83-408e-8b40-59c35b6fcd90","Type":"ContainerDied","Data":"1e69a3294841638470b56e6ad1ccfb5caf614f4c65c282b6364ac89e54027846"} Nov 25 14:44:02 crc kubenswrapper[4796]: I1125 14:44:02.705071 4796 generic.go:334] "Generic (PLEG): container finished" podID="51d3d4ee-22cb-4ec4-ad98-acfdf570ba21" containerID="224c0e5523c636191053b8b4316851dddedf503ddfc2a5c355675113acbb04d4" exitCode=0 Nov 25 14:44:02 crc kubenswrapper[4796]: I1125 14:44:02.705208 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-95mnt" event={"ID":"51d3d4ee-22cb-4ec4-ad98-acfdf570ba21","Type":"ContainerDied","Data":"224c0e5523c636191053b8b4316851dddedf503ddfc2a5c355675113acbb04d4"} Nov 25 14:44:02 crc kubenswrapper[4796]: I1125 14:44:02.711543 4796 generic.go:334] "Generic (PLEG): container finished" podID="be9578a6-b0e4-4efb-ae4b-86cd92008d5e" containerID="b6051fdc8b9ed05f0002d452706c8aa881a72e0bbfadbb66a4254c29f3d43b99" exitCode=0 Nov 25 14:44:02 crc kubenswrapper[4796]: I1125 14:44:02.713537 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cfce-account-create-ss8dd" event={"ID":"be9578a6-b0e4-4efb-ae4b-86cd92008d5e","Type":"ContainerDied","Data":"b6051fdc8b9ed05f0002d452706c8aa881a72e0bbfadbb66a4254c29f3d43b99"} Nov 
25 14:44:03 crc kubenswrapper[4796]: I1125 14:44:03.023681 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3c99-account-create-vmvq6" Nov 25 14:44:03 crc kubenswrapper[4796]: I1125 14:44:03.072101 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcz6k\" (UniqueName: \"kubernetes.io/projected/6107e4d3-3da4-4db6-9ec5-501f1b44c37c-kube-api-access-rcz6k\") pod \"6107e4d3-3da4-4db6-9ec5-501f1b44c37c\" (UID: \"6107e4d3-3da4-4db6-9ec5-501f1b44c37c\") " Nov 25 14:44:03 crc kubenswrapper[4796]: I1125 14:44:03.072877 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6107e4d3-3da4-4db6-9ec5-501f1b44c37c-operator-scripts\") pod \"6107e4d3-3da4-4db6-9ec5-501f1b44c37c\" (UID: \"6107e4d3-3da4-4db6-9ec5-501f1b44c37c\") " Nov 25 14:44:03 crc kubenswrapper[4796]: I1125 14:44:03.074088 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6107e4d3-3da4-4db6-9ec5-501f1b44c37c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6107e4d3-3da4-4db6-9ec5-501f1b44c37c" (UID: "6107e4d3-3da4-4db6-9ec5-501f1b44c37c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:44:03 crc kubenswrapper[4796]: I1125 14:44:03.087636 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6107e4d3-3da4-4db6-9ec5-501f1b44c37c-kube-api-access-rcz6k" (OuterVolumeSpecName: "kube-api-access-rcz6k") pod "6107e4d3-3da4-4db6-9ec5-501f1b44c37c" (UID: "6107e4d3-3da4-4db6-9ec5-501f1b44c37c"). InnerVolumeSpecName "kube-api-access-rcz6k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:44:03 crc kubenswrapper[4796]: I1125 14:44:03.175825 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6107e4d3-3da4-4db6-9ec5-501f1b44c37c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:44:03 crc kubenswrapper[4796]: I1125 14:44:03.175866 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcz6k\" (UniqueName: \"kubernetes.io/projected/6107e4d3-3da4-4db6-9ec5-501f1b44c37c-kube-api-access-rcz6k\") on node \"crc\" DevicePath \"\"" Nov 25 14:44:03 crc kubenswrapper[4796]: I1125 14:44:03.723248 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49501e2a-5ad0-4de7-9b98-510c0c55863f","Type":"ContainerStarted","Data":"aace218b4e648f536a3d7e91bc4d9ba6352cc6634bb3f945fbbeaa31632aa7ff"} Nov 25 14:44:03 crc kubenswrapper[4796]: I1125 14:44:03.723299 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49501e2a-5ad0-4de7-9b98-510c0c55863f","Type":"ContainerStarted","Data":"33fc540fc4a83bf1f88a2ad2762d175e65e5c6473e9bdfccc4974d6ce43f6dc3"} Nov 25 14:44:03 crc kubenswrapper[4796]: I1125 14:44:03.723310 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49501e2a-5ad0-4de7-9b98-510c0c55863f","Type":"ContainerStarted","Data":"7e195d09df296eeff0e77b11941235f815dd446349ca5d5776a54bc9297abff4"} Nov 25 14:44:03 crc kubenswrapper[4796]: I1125 14:44:03.725214 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3c99-account-create-vmvq6" Nov 25 14:44:03 crc kubenswrapper[4796]: I1125 14:44:03.725224 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3c99-account-create-vmvq6" event={"ID":"6107e4d3-3da4-4db6-9ec5-501f1b44c37c","Type":"ContainerDied","Data":"83040c27a3bd6f7c2dcb3cd32ada0d7875436264760b0c6b50d28bf0964c414c"} Nov 25 14:44:03 crc kubenswrapper[4796]: I1125 14:44:03.725461 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83040c27a3bd6f7c2dcb3cd32ada0d7875436264760b0c6b50d28bf0964c414c" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.458230 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-95mnt" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.467326 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-g8hc4" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.495531 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cfce-account-create-ss8dd" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.521102 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a7c2-account-create-w6tcb" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.521598 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-p2wm7" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.550365 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be9578a6-b0e4-4efb-ae4b-86cd92008d5e-operator-scripts\") pod \"be9578a6-b0e4-4efb-ae4b-86cd92008d5e\" (UID: \"be9578a6-b0e4-4efb-ae4b-86cd92008d5e\") " Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.550430 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhzp7\" (UniqueName: \"kubernetes.io/projected/2739db56-54ae-4a4d-8941-5d27d9fbbd85-kube-api-access-fhzp7\") pod \"2739db56-54ae-4a4d-8941-5d27d9fbbd85\" (UID: \"2739db56-54ae-4a4d-8941-5d27d9fbbd85\") " Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.550497 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd9w4\" (UniqueName: \"kubernetes.io/projected/33029dfd-906d-425d-8266-d87ea1af419b-kube-api-access-jd9w4\") pod \"33029dfd-906d-425d-8266-d87ea1af419b\" (UID: \"33029dfd-906d-425d-8266-d87ea1af419b\") " Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.550530 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51d3d4ee-22cb-4ec4-ad98-acfdf570ba21-operator-scripts\") pod \"51d3d4ee-22cb-4ec4-ad98-acfdf570ba21\" (UID: \"51d3d4ee-22cb-4ec4-ad98-acfdf570ba21\") " Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.550547 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67e2365b-5b83-408e-8b40-59c35b6fcd90-operator-scripts\") pod \"67e2365b-5b83-408e-8b40-59c35b6fcd90\" (UID: \"67e2365b-5b83-408e-8b40-59c35b6fcd90\") " Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.550609 4796 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33029dfd-906d-425d-8266-d87ea1af419b-operator-scripts\") pod \"33029dfd-906d-425d-8266-d87ea1af419b\" (UID: \"33029dfd-906d-425d-8266-d87ea1af419b\") " Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.550651 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n52jr\" (UniqueName: \"kubernetes.io/projected/be9578a6-b0e4-4efb-ae4b-86cd92008d5e-kube-api-access-n52jr\") pod \"be9578a6-b0e4-4efb-ae4b-86cd92008d5e\" (UID: \"be9578a6-b0e4-4efb-ae4b-86cd92008d5e\") " Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.550727 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhbcc\" (UniqueName: \"kubernetes.io/projected/67e2365b-5b83-408e-8b40-59c35b6fcd90-kube-api-access-nhbcc\") pod \"67e2365b-5b83-408e-8b40-59c35b6fcd90\" (UID: \"67e2365b-5b83-408e-8b40-59c35b6fcd90\") " Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.550769 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2739db56-54ae-4a4d-8941-5d27d9fbbd85-operator-scripts\") pod \"2739db56-54ae-4a4d-8941-5d27d9fbbd85\" (UID: \"2739db56-54ae-4a4d-8941-5d27d9fbbd85\") " Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.550841 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm29t\" (UniqueName: \"kubernetes.io/projected/51d3d4ee-22cb-4ec4-ad98-acfdf570ba21-kube-api-access-xm29t\") pod \"51d3d4ee-22cb-4ec4-ad98-acfdf570ba21\" (UID: \"51d3d4ee-22cb-4ec4-ad98-acfdf570ba21\") " Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.551482 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be9578a6-b0e4-4efb-ae4b-86cd92008d5e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"be9578a6-b0e4-4efb-ae4b-86cd92008d5e" (UID: "be9578a6-b0e4-4efb-ae4b-86cd92008d5e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.551519 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33029dfd-906d-425d-8266-d87ea1af419b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33029dfd-906d-425d-8266-d87ea1af419b" (UID: "33029dfd-906d-425d-8266-d87ea1af419b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.552149 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51d3d4ee-22cb-4ec4-ad98-acfdf570ba21-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51d3d4ee-22cb-4ec4-ad98-acfdf570ba21" (UID: "51d3d4ee-22cb-4ec4-ad98-acfdf570ba21"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.552164 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67e2365b-5b83-408e-8b40-59c35b6fcd90-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "67e2365b-5b83-408e-8b40-59c35b6fcd90" (UID: "67e2365b-5b83-408e-8b40-59c35b6fcd90"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.552589 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2739db56-54ae-4a4d-8941-5d27d9fbbd85-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2739db56-54ae-4a4d-8941-5d27d9fbbd85" (UID: "2739db56-54ae-4a4d-8941-5d27d9fbbd85"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.556871 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be9578a6-b0e4-4efb-ae4b-86cd92008d5e-kube-api-access-n52jr" (OuterVolumeSpecName: "kube-api-access-n52jr") pod "be9578a6-b0e4-4efb-ae4b-86cd92008d5e" (UID: "be9578a6-b0e4-4efb-ae4b-86cd92008d5e"). InnerVolumeSpecName "kube-api-access-n52jr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.560569 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2739db56-54ae-4a4d-8941-5d27d9fbbd85-kube-api-access-fhzp7" (OuterVolumeSpecName: "kube-api-access-fhzp7") pod "2739db56-54ae-4a4d-8941-5d27d9fbbd85" (UID: "2739db56-54ae-4a4d-8941-5d27d9fbbd85"). InnerVolumeSpecName "kube-api-access-fhzp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.562615 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33029dfd-906d-425d-8266-d87ea1af419b-kube-api-access-jd9w4" (OuterVolumeSpecName: "kube-api-access-jd9w4") pod "33029dfd-906d-425d-8266-d87ea1af419b" (UID: "33029dfd-906d-425d-8266-d87ea1af419b"). InnerVolumeSpecName "kube-api-access-jd9w4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.573040 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67e2365b-5b83-408e-8b40-59c35b6fcd90-kube-api-access-nhbcc" (OuterVolumeSpecName: "kube-api-access-nhbcc") pod "67e2365b-5b83-408e-8b40-59c35b6fcd90" (UID: "67e2365b-5b83-408e-8b40-59c35b6fcd90"). InnerVolumeSpecName "kube-api-access-nhbcc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.575870 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51d3d4ee-22cb-4ec4-ad98-acfdf570ba21-kube-api-access-xm29t" (OuterVolumeSpecName: "kube-api-access-xm29t") pod "51d3d4ee-22cb-4ec4-ad98-acfdf570ba21" (UID: "51d3d4ee-22cb-4ec4-ad98-acfdf570ba21"). InnerVolumeSpecName "kube-api-access-xm29t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.653865 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xm29t\" (UniqueName: \"kubernetes.io/projected/51d3d4ee-22cb-4ec4-ad98-acfdf570ba21-kube-api-access-xm29t\") on node \"crc\" DevicePath \"\"" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.654005 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be9578a6-b0e4-4efb-ae4b-86cd92008d5e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.654022 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhzp7\" (UniqueName: \"kubernetes.io/projected/2739db56-54ae-4a4d-8941-5d27d9fbbd85-kube-api-access-fhzp7\") on node \"crc\" DevicePath \"\"" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.654032 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jd9w4\" (UniqueName: \"kubernetes.io/projected/33029dfd-906d-425d-8266-d87ea1af419b-kube-api-access-jd9w4\") on node \"crc\" DevicePath \"\"" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.654044 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51d3d4ee-22cb-4ec4-ad98-acfdf570ba21-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.654057 4796 
reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67e2365b-5b83-408e-8b40-59c35b6fcd90-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.654067 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33029dfd-906d-425d-8266-d87ea1af419b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.654077 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n52jr\" (UniqueName: \"kubernetes.io/projected/be9578a6-b0e4-4efb-ae4b-86cd92008d5e-kube-api-access-n52jr\") on node \"crc\" DevicePath \"\"" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.654088 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhbcc\" (UniqueName: \"kubernetes.io/projected/67e2365b-5b83-408e-8b40-59c35b6fcd90-kube-api-access-nhbcc\") on node \"crc\" DevicePath \"\"" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.654098 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2739db56-54ae-4a4d-8941-5d27d9fbbd85-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.754564 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-p2wm7" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.754586 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-p2wm7" event={"ID":"33029dfd-906d-425d-8266-d87ea1af419b","Type":"ContainerDied","Data":"cb345b2c85cb94b62fcebd89a6b5a175e4170770aef0aefa081dc1ce8a0d6735"} Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.754716 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb345b2c85cb94b62fcebd89a6b5a175e4170770aef0aefa081dc1ce8a0d6735" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.756362 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a7c2-account-create-w6tcb" event={"ID":"2739db56-54ae-4a4d-8941-5d27d9fbbd85","Type":"ContainerDied","Data":"8c137fb599853509c79300046c2b23e08b2a7006fbb8924356da693e854db990"} Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.756403 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c137fb599853509c79300046c2b23e08b2a7006fbb8924356da693e854db990" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.756411 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a7c2-account-create-w6tcb" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.757717 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-g8hc4" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.757728 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-g8hc4" event={"ID":"67e2365b-5b83-408e-8b40-59c35b6fcd90","Type":"ContainerDied","Data":"1a1633d6c8b35e45cc54edbfc7bbff893914c72d80df9d20f4878ea576e70164"} Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.757750 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a1633d6c8b35e45cc54edbfc7bbff893914c72d80df9d20f4878ea576e70164" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.759978 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-95mnt" event={"ID":"51d3d4ee-22cb-4ec4-ad98-acfdf570ba21","Type":"ContainerDied","Data":"2975b29d7d24bef2d0cef6606e3dc99d5e2f9ff878204df817b3e4dc25062302"} Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.760017 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2975b29d7d24bef2d0cef6606e3dc99d5e2f9ff878204df817b3e4dc25062302" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.760095 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-95mnt" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.763318 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cfce-account-create-ss8dd" event={"ID":"be9578a6-b0e4-4efb-ae4b-86cd92008d5e","Type":"ContainerDied","Data":"59d717000e619a73749ad21625696494438c56f8aa426c1d1c4cf3d945951da0"} Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.763353 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59d717000e619a73749ad21625696494438c56f8aa426c1d1c4cf3d945951da0" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.763407 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-cfce-account-create-ss8dd" Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.769779 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-stpn7" event={"ID":"02b64285-eaa9-4677-aa4c-a16f0cffc2f8","Type":"ContainerStarted","Data":"9273adb8a7b2702f78b3ff186c214371c06a57b8d66d3d1ae12bc29558f29507"} Nov 25 14:44:06 crc kubenswrapper[4796]: I1125 14:44:06.788724 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-stpn7" podStartSLOduration=9.452518448 podStartE2EDuration="14.788700901s" podCreationTimestamp="2025-11-25 14:43:52 +0000 UTC" firstStartedPulling="2025-11-25 14:44:00.972922822 +0000 UTC m=+1169.316032246" lastFinishedPulling="2025-11-25 14:44:06.309105275 +0000 UTC m=+1174.652214699" observedRunningTime="2025-11-25 14:44:06.784610105 +0000 UTC m=+1175.127719539" watchObservedRunningTime="2025-11-25 14:44:06.788700901 +0000 UTC m=+1175.131810325" Nov 25 14:44:07 crc kubenswrapper[4796]: I1125 14:44:07.800150 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49501e2a-5ad0-4de7-9b98-510c0c55863f","Type":"ContainerStarted","Data":"beb9cca6927a9e4b715a3572c03d197fd355ba50cf5ec024fcb1144800d2388f"} Nov 25 14:44:07 crc kubenswrapper[4796]: I1125 14:44:07.800752 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49501e2a-5ad0-4de7-9b98-510c0c55863f","Type":"ContainerStarted","Data":"19d0948e1d3e32c43d5f20dd4bf0e1b926d0cb4c791308b0048912cce0615534"} Nov 25 14:44:07 crc kubenswrapper[4796]: I1125 14:44:07.800769 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49501e2a-5ad0-4de7-9b98-510c0c55863f","Type":"ContainerStarted","Data":"b82c05a56889be3d10b93379df63f65636d825db26fc21e41e4220b5d57e9fff"} Nov 25 14:44:08 crc kubenswrapper[4796]: I1125 14:44:08.811751 4796 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49501e2a-5ad0-4de7-9b98-510c0c55863f","Type":"ContainerStarted","Data":"8f06fd20e4f3201028e4fb4efd0c9d4b2a11bed23c1e54593745175ef0919f33"} Nov 25 14:44:10 crc kubenswrapper[4796]: I1125 14:44:10.836543 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49501e2a-5ad0-4de7-9b98-510c0c55863f","Type":"ContainerStarted","Data":"d07e8fabf0752d56988d4bb29294048a09a148f22207775a26f39941e13cb83b"} Nov 25 14:44:10 crc kubenswrapper[4796]: I1125 14:44:10.836862 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49501e2a-5ad0-4de7-9b98-510c0c55863f","Type":"ContainerStarted","Data":"452d1c5e9768d64ca58cacfa8e76ea3219b8400b66c8e6946c6a84974a650cde"} Nov 25 14:44:11 crc kubenswrapper[4796]: I1125 14:44:11.854546 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49501e2a-5ad0-4de7-9b98-510c0c55863f","Type":"ContainerStarted","Data":"eaea129eefada10edeccd3da54b20e9e507833b45498ff9420ce7a94f7f1cc0a"} Nov 25 14:44:11 crc kubenswrapper[4796]: I1125 14:44:11.855323 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49501e2a-5ad0-4de7-9b98-510c0c55863f","Type":"ContainerStarted","Data":"804cc5b85a39fc8bb923cd8b7e28a03c267e55e0e40e1e1947654ebed858c29e"} Nov 25 14:44:11 crc kubenswrapper[4796]: I1125 14:44:11.855349 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49501e2a-5ad0-4de7-9b98-510c0c55863f","Type":"ContainerStarted","Data":"f9fdde718a1a296e91bb3eb1d6ab2e3c0fcfcc9fdf7dd01a9014ba867ccc48dc"} Nov 25 14:44:12 crc kubenswrapper[4796]: I1125 14:44:12.867652 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"49501e2a-5ad0-4de7-9b98-510c0c55863f","Type":"ContainerStarted","Data":"83bd8b32a9abcc7433508ce915eef520843b479689216104452dfa54906310cb"} Nov 25 14:44:12 crc kubenswrapper[4796]: I1125 14:44:12.868008 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49501e2a-5ad0-4de7-9b98-510c0c55863f","Type":"ContainerStarted","Data":"f2fe617e96b4074b944968bdf3f376828ce802d44fefba0cfb2a8003602de0ec"} Nov 25 14:44:12 crc kubenswrapper[4796]: I1125 14:44:12.932180 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=46.083511698 podStartE2EDuration="55.932159373s" podCreationTimestamp="2025-11-25 14:43:17 +0000 UTC" firstStartedPulling="2025-11-25 14:44:00.110787321 +0000 UTC m=+1168.453896745" lastFinishedPulling="2025-11-25 14:44:09.959434996 +0000 UTC m=+1178.302544420" observedRunningTime="2025-11-25 14:44:12.929253782 +0000 UTC m=+1181.272363206" watchObservedRunningTime="2025-11-25 14:44:12.932159373 +0000 UTC m=+1181.275268787" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.192134 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-79zq7"] Nov 25 14:44:13 crc kubenswrapper[4796]: E1125 14:44:13.192927 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51d3d4ee-22cb-4ec4-ad98-acfdf570ba21" containerName="mariadb-database-create" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.192948 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="51d3d4ee-22cb-4ec4-ad98-acfdf570ba21" containerName="mariadb-database-create" Nov 25 14:44:13 crc kubenswrapper[4796]: E1125 14:44:13.192983 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e2365b-5b83-408e-8b40-59c35b6fcd90" containerName="mariadb-database-create" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.192990 4796 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="67e2365b-5b83-408e-8b40-59c35b6fcd90" containerName="mariadb-database-create" Nov 25 14:44:13 crc kubenswrapper[4796]: E1125 14:44:13.193007 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be9578a6-b0e4-4efb-ae4b-86cd92008d5e" containerName="mariadb-account-create" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.193014 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="be9578a6-b0e4-4efb-ae4b-86cd92008d5e" containerName="mariadb-account-create" Nov 25 14:44:13 crc kubenswrapper[4796]: E1125 14:44:13.193031 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6107e4d3-3da4-4db6-9ec5-501f1b44c37c" containerName="mariadb-account-create" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.193038 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6107e4d3-3da4-4db6-9ec5-501f1b44c37c" containerName="mariadb-account-create" Nov 25 14:44:13 crc kubenswrapper[4796]: E1125 14:44:13.193056 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33029dfd-906d-425d-8266-d87ea1af419b" containerName="mariadb-database-create" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.193063 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="33029dfd-906d-425d-8266-d87ea1af419b" containerName="mariadb-database-create" Nov 25 14:44:13 crc kubenswrapper[4796]: E1125 14:44:13.193071 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2739db56-54ae-4a4d-8941-5d27d9fbbd85" containerName="mariadb-account-create" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.193078 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2739db56-54ae-4a4d-8941-5d27d9fbbd85" containerName="mariadb-account-create" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.193309 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e2365b-5b83-408e-8b40-59c35b6fcd90" containerName="mariadb-database-create" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 
14:44:13.193336 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="2739db56-54ae-4a4d-8941-5d27d9fbbd85" containerName="mariadb-account-create" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.193361 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6107e4d3-3da4-4db6-9ec5-501f1b44c37c" containerName="mariadb-account-create" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.193377 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="51d3d4ee-22cb-4ec4-ad98-acfdf570ba21" containerName="mariadb-database-create" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.193397 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="be9578a6-b0e4-4efb-ae4b-86cd92008d5e" containerName="mariadb-account-create" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.193409 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="33029dfd-906d-425d-8266-d87ea1af419b" containerName="mariadb-database-create" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.194604 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-79zq7" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.196947 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.211672 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-79zq7"] Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.277857 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-dns-svc\") pod \"dnsmasq-dns-764c5664d7-79zq7\" (UID: \"821175f1-a773-4def-b744-22423894346c\") " pod="openstack/dnsmasq-dns-764c5664d7-79zq7" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.277911 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-79zq7\" (UID: \"821175f1-a773-4def-b744-22423894346c\") " pod="openstack/dnsmasq-dns-764c5664d7-79zq7" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.278013 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-79zq7\" (UID: \"821175f1-a773-4def-b744-22423894346c\") " pod="openstack/dnsmasq-dns-764c5664d7-79zq7" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.278122 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-config\") pod \"dnsmasq-dns-764c5664d7-79zq7\" (UID: \"821175f1-a773-4def-b744-22423894346c\") " 
pod="openstack/dnsmasq-dns-764c5664d7-79zq7" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.278184 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-79zq7\" (UID: \"821175f1-a773-4def-b744-22423894346c\") " pod="openstack/dnsmasq-dns-764c5664d7-79zq7" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.278223 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lzld\" (UniqueName: \"kubernetes.io/projected/821175f1-a773-4def-b744-22423894346c-kube-api-access-4lzld\") pod \"dnsmasq-dns-764c5664d7-79zq7\" (UID: \"821175f1-a773-4def-b744-22423894346c\") " pod="openstack/dnsmasq-dns-764c5664d7-79zq7" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.379295 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-79zq7\" (UID: \"821175f1-a773-4def-b744-22423894346c\") " pod="openstack/dnsmasq-dns-764c5664d7-79zq7" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.379347 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lzld\" (UniqueName: \"kubernetes.io/projected/821175f1-a773-4def-b744-22423894346c-kube-api-access-4lzld\") pod \"dnsmasq-dns-764c5664d7-79zq7\" (UID: \"821175f1-a773-4def-b744-22423894346c\") " pod="openstack/dnsmasq-dns-764c5664d7-79zq7" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.379385 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-dns-svc\") pod \"dnsmasq-dns-764c5664d7-79zq7\" (UID: \"821175f1-a773-4def-b744-22423894346c\") " 
pod="openstack/dnsmasq-dns-764c5664d7-79zq7" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.379409 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-79zq7\" (UID: \"821175f1-a773-4def-b744-22423894346c\") " pod="openstack/dnsmasq-dns-764c5664d7-79zq7" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.379435 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-79zq7\" (UID: \"821175f1-a773-4def-b744-22423894346c\") " pod="openstack/dnsmasq-dns-764c5664d7-79zq7" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.379489 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-config\") pod \"dnsmasq-dns-764c5664d7-79zq7\" (UID: \"821175f1-a773-4def-b744-22423894346c\") " pod="openstack/dnsmasq-dns-764c5664d7-79zq7" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.380430 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-config\") pod \"dnsmasq-dns-764c5664d7-79zq7\" (UID: \"821175f1-a773-4def-b744-22423894346c\") " pod="openstack/dnsmasq-dns-764c5664d7-79zq7" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.380493 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-79zq7\" (UID: \"821175f1-a773-4def-b744-22423894346c\") " pod="openstack/dnsmasq-dns-764c5664d7-79zq7" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 
14:44:13.380492 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-79zq7\" (UID: \"821175f1-a773-4def-b744-22423894346c\") " pod="openstack/dnsmasq-dns-764c5664d7-79zq7" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.380601 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-dns-svc\") pod \"dnsmasq-dns-764c5664d7-79zq7\" (UID: \"821175f1-a773-4def-b744-22423894346c\") " pod="openstack/dnsmasq-dns-764c5664d7-79zq7" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.380676 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-79zq7\" (UID: \"821175f1-a773-4def-b744-22423894346c\") " pod="openstack/dnsmasq-dns-764c5664d7-79zq7" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.405642 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lzld\" (UniqueName: \"kubernetes.io/projected/821175f1-a773-4def-b744-22423894346c-kube-api-access-4lzld\") pod \"dnsmasq-dns-764c5664d7-79zq7\" (UID: \"821175f1-a773-4def-b744-22423894346c\") " pod="openstack/dnsmasq-dns-764c5664d7-79zq7" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.518742 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-79zq7" Nov 25 14:44:13 crc kubenswrapper[4796]: I1125 14:44:13.976734 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-79zq7"] Nov 25 14:44:13 crc kubenswrapper[4796]: W1125 14:44:13.983856 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod821175f1_a773_4def_b744_22423894346c.slice/crio-1c32acd170122510742e9ed99066c9bd9e885d618b85ba6c1ab9bcd0850bd809 WatchSource:0}: Error finding container 1c32acd170122510742e9ed99066c9bd9e885d618b85ba6c1ab9bcd0850bd809: Status 404 returned error can't find the container with id 1c32acd170122510742e9ed99066c9bd9e885d618b85ba6c1ab9bcd0850bd809 Nov 25 14:44:14 crc kubenswrapper[4796]: I1125 14:44:14.884021 4796 generic.go:334] "Generic (PLEG): container finished" podID="821175f1-a773-4def-b744-22423894346c" containerID="ded7f77298f21026d964cd45b910c7c502561b7fa19e3af2bf858ab3261ce24e" exitCode=0 Nov 25 14:44:14 crc kubenswrapper[4796]: I1125 14:44:14.884197 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-79zq7" event={"ID":"821175f1-a773-4def-b744-22423894346c","Type":"ContainerDied","Data":"ded7f77298f21026d964cd45b910c7c502561b7fa19e3af2bf858ab3261ce24e"} Nov 25 14:44:14 crc kubenswrapper[4796]: I1125 14:44:14.884374 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-79zq7" event={"ID":"821175f1-a773-4def-b744-22423894346c","Type":"ContainerStarted","Data":"1c32acd170122510742e9ed99066c9bd9e885d618b85ba6c1ab9bcd0850bd809"} Nov 25 14:44:15 crc kubenswrapper[4796]: I1125 14:44:15.898118 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kddwd" event={"ID":"d3947d76-dff0-44d7-9b86-d2a0406db500","Type":"ContainerStarted","Data":"3d3e0703fe13c0316f521fa3c9dd09f327a2ea1690e7152d4c9b7354c6c0c159"} Nov 25 14:44:15 crc 
kubenswrapper[4796]: I1125 14:44:15.900349 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-79zq7" event={"ID":"821175f1-a773-4def-b744-22423894346c","Type":"ContainerStarted","Data":"c9e9dcd9e4f1f98516f78c5ee8e84d685accb67b14f1d0d698a9cfca628fd119"} Nov 25 14:44:15 crc kubenswrapper[4796]: I1125 14:44:15.900511 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-79zq7" Nov 25 14:44:15 crc kubenswrapper[4796]: I1125 14:44:15.917483 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-kddwd" podStartSLOduration=2.222348817 podStartE2EDuration="30.917458976s" podCreationTimestamp="2025-11-25 14:43:45 +0000 UTC" firstStartedPulling="2025-11-25 14:43:46.555118712 +0000 UTC m=+1154.898228136" lastFinishedPulling="2025-11-25 14:44:15.250228871 +0000 UTC m=+1183.593338295" observedRunningTime="2025-11-25 14:44:15.913850235 +0000 UTC m=+1184.256959659" watchObservedRunningTime="2025-11-25 14:44:15.917458976 +0000 UTC m=+1184.260568400" Nov 25 14:44:15 crc kubenswrapper[4796]: I1125 14:44:15.940056 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-79zq7" podStartSLOduration=2.940035778 podStartE2EDuration="2.940035778s" podCreationTimestamp="2025-11-25 14:44:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:44:15.934472515 +0000 UTC m=+1184.277581949" watchObservedRunningTime="2025-11-25 14:44:15.940035778 +0000 UTC m=+1184.283145202" Nov 25 14:44:19 crc kubenswrapper[4796]: I1125 14:44:19.514503 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Nov 25 14:44:19 crc kubenswrapper[4796]: I1125 14:44:19.515924 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 14:44:19 crc kubenswrapper[4796]: I1125 14:44:19.516038 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 14:44:19 crc kubenswrapper[4796]: I1125 14:44:19.518801 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8401d6a31e01e755468ba5162268a2636f0971d1abaa35f12c5126e3ee6beb3c"} pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 14:44:19 crc kubenswrapper[4796]: I1125 14:44:19.518929 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" containerID="cri-o://8401d6a31e01e755468ba5162268a2636f0971d1abaa35f12c5126e3ee6beb3c" gracePeriod=600 Nov 25 14:44:19 crc kubenswrapper[4796]: E1125 14:44:19.815304 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc683b765_b1f2_49b1_b29d_6466cda73ca8.slice/crio-conmon-8401d6a31e01e755468ba5162268a2636f0971d1abaa35f12c5126e3ee6beb3c.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc683b765_b1f2_49b1_b29d_6466cda73ca8.slice/crio-8401d6a31e01e755468ba5162268a2636f0971d1abaa35f12c5126e3ee6beb3c.scope\": RecentStats: unable to find data in memory cache]"
Nov 25 14:44:19 crc kubenswrapper[4796]: I1125 14:44:19.946264 4796 generic.go:334] "Generic (PLEG): container finished" podID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerID="8401d6a31e01e755468ba5162268a2636f0971d1abaa35f12c5126e3ee6beb3c" exitCode=0
Nov 25 14:44:19 crc kubenswrapper[4796]: I1125 14:44:19.946607 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerDied","Data":"8401d6a31e01e755468ba5162268a2636f0971d1abaa35f12c5126e3ee6beb3c"}
Nov 25 14:44:19 crc kubenswrapper[4796]: I1125 14:44:19.946690 4796 scope.go:117] "RemoveContainer" containerID="6ee44c58110c9589c1b970cfd7a594ab20931e9987e3c25566f0b8fb802d3fc7"
Nov 25 14:44:20 crc kubenswrapper[4796]: I1125 14:44:20.958345 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerStarted","Data":"b89880c276465411a1df30a0fcd1ff1a63ffefdebe8f12fb134cee10c6604130"}
Nov 25 14:44:23 crc kubenswrapper[4796]: I1125 14:44:23.520875 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-79zq7"
Nov 25 14:44:23 crc kubenswrapper[4796]: I1125 14:44:23.602565 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nmsz6"]
Nov 25 14:44:23 crc kubenswrapper[4796]: I1125 14:44:23.602846 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-nmsz6" podUID="099992bc-6139-4064-b84d-7f9c319026d9" containerName="dnsmasq-dns" containerID="cri-o://7e44095bfee5b8037ffcd116956bbfd491e02e86a03231cdddebd960e025836f" gracePeriod=10
Nov 25 14:44:23 crc kubenswrapper[4796]: I1125 14:44:23.990307 4796 generic.go:334] "Generic (PLEG): container finished" podID="099992bc-6139-4064-b84d-7f9c319026d9" containerID="7e44095bfee5b8037ffcd116956bbfd491e02e86a03231cdddebd960e025836f" exitCode=0
Nov 25 14:44:23 crc kubenswrapper[4796]: I1125 14:44:23.990354 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nmsz6" event={"ID":"099992bc-6139-4064-b84d-7f9c319026d9","Type":"ContainerDied","Data":"7e44095bfee5b8037ffcd116956bbfd491e02e86a03231cdddebd960e025836f"}
Nov 25 14:44:24 crc kubenswrapper[4796]: I1125 14:44:24.111444 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nmsz6"
Nov 25 14:44:24 crc kubenswrapper[4796]: I1125 14:44:24.265644 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/099992bc-6139-4064-b84d-7f9c319026d9-ovsdbserver-sb\") pod \"099992bc-6139-4064-b84d-7f9c319026d9\" (UID: \"099992bc-6139-4064-b84d-7f9c319026d9\") "
Nov 25 14:44:24 crc kubenswrapper[4796]: I1125 14:44:24.265734 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxlqw\" (UniqueName: \"kubernetes.io/projected/099992bc-6139-4064-b84d-7f9c319026d9-kube-api-access-jxlqw\") pod \"099992bc-6139-4064-b84d-7f9c319026d9\" (UID: \"099992bc-6139-4064-b84d-7f9c319026d9\") "
Nov 25 14:44:24 crc kubenswrapper[4796]: I1125 14:44:24.265791 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/099992bc-6139-4064-b84d-7f9c319026d9-config\") pod \"099992bc-6139-4064-b84d-7f9c319026d9\" (UID: \"099992bc-6139-4064-b84d-7f9c319026d9\") "
Nov 25 14:44:24 crc kubenswrapper[4796]: I1125 14:44:24.265855 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/099992bc-6139-4064-b84d-7f9c319026d9-ovsdbserver-nb\") pod \"099992bc-6139-4064-b84d-7f9c319026d9\" (UID: \"099992bc-6139-4064-b84d-7f9c319026d9\") "
Nov 25 14:44:24 crc kubenswrapper[4796]: I1125 14:44:24.266690 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/099992bc-6139-4064-b84d-7f9c319026d9-dns-svc\") pod \"099992bc-6139-4064-b84d-7f9c319026d9\" (UID: \"099992bc-6139-4064-b84d-7f9c319026d9\") "
Nov 25 14:44:24 crc kubenswrapper[4796]: I1125 14:44:24.272773 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/099992bc-6139-4064-b84d-7f9c319026d9-kube-api-access-jxlqw" (OuterVolumeSpecName: "kube-api-access-jxlqw") pod "099992bc-6139-4064-b84d-7f9c319026d9" (UID: "099992bc-6139-4064-b84d-7f9c319026d9"). InnerVolumeSpecName "kube-api-access-jxlqw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 14:44:24 crc kubenswrapper[4796]: I1125 14:44:24.311946 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/099992bc-6139-4064-b84d-7f9c319026d9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "099992bc-6139-4064-b84d-7f9c319026d9" (UID: "099992bc-6139-4064-b84d-7f9c319026d9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 14:44:24 crc kubenswrapper[4796]: I1125 14:44:24.321382 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/099992bc-6139-4064-b84d-7f9c319026d9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "099992bc-6139-4064-b84d-7f9c319026d9" (UID: "099992bc-6139-4064-b84d-7f9c319026d9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 14:44:24 crc kubenswrapper[4796]: I1125 14:44:24.325750 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/099992bc-6139-4064-b84d-7f9c319026d9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "099992bc-6139-4064-b84d-7f9c319026d9" (UID: "099992bc-6139-4064-b84d-7f9c319026d9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 14:44:24 crc kubenswrapper[4796]: I1125 14:44:24.326061 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/099992bc-6139-4064-b84d-7f9c319026d9-config" (OuterVolumeSpecName: "config") pod "099992bc-6139-4064-b84d-7f9c319026d9" (UID: "099992bc-6139-4064-b84d-7f9c319026d9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 14:44:24 crc kubenswrapper[4796]: I1125 14:44:24.369345 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/099992bc-6139-4064-b84d-7f9c319026d9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 25 14:44:24 crc kubenswrapper[4796]: I1125 14:44:24.369427 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxlqw\" (UniqueName: \"kubernetes.io/projected/099992bc-6139-4064-b84d-7f9c319026d9-kube-api-access-jxlqw\") on node \"crc\" DevicePath \"\""
Nov 25 14:44:24 crc kubenswrapper[4796]: I1125 14:44:24.369710 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/099992bc-6139-4064-b84d-7f9c319026d9-config\") on node \"crc\" DevicePath \"\""
Nov 25 14:44:24 crc kubenswrapper[4796]: I1125 14:44:24.369722 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/099992bc-6139-4064-b84d-7f9c319026d9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 25 14:44:24 crc kubenswrapper[4796]: I1125 14:44:24.369732 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/099992bc-6139-4064-b84d-7f9c319026d9-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 25 14:44:25 crc kubenswrapper[4796]: I1125 14:44:25.005142 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nmsz6" event={"ID":"099992bc-6139-4064-b84d-7f9c319026d9","Type":"ContainerDied","Data":"e9334234c732ddac739ebc4120dcdaf7559b31eb390f4a80ea3ef14fc523a461"}
Nov 25 14:44:25 crc kubenswrapper[4796]: I1125 14:44:25.005199 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nmsz6"
Nov 25 14:44:25 crc kubenswrapper[4796]: I1125 14:44:25.005236 4796 scope.go:117] "RemoveContainer" containerID="7e44095bfee5b8037ffcd116956bbfd491e02e86a03231cdddebd960e025836f"
Nov 25 14:44:25 crc kubenswrapper[4796]: I1125 14:44:25.039697 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nmsz6"]
Nov 25 14:44:25 crc kubenswrapper[4796]: I1125 14:44:25.052238 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nmsz6"]
Nov 25 14:44:25 crc kubenswrapper[4796]: I1125 14:44:25.893334 4796 scope.go:117] "RemoveContainer" containerID="6daaca2e7aff3f704579e5522c48eb970abd16dcd86ec4dafd170c77dc00e0c4"
Nov 25 14:44:26 crc kubenswrapper[4796]: I1125 14:44:26.900382 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="099992bc-6139-4064-b84d-7f9c319026d9" path="/var/lib/kubelet/pods/099992bc-6139-4064-b84d-7f9c319026d9/volumes"
Nov 25 14:45:00 crc kubenswrapper[4796]: I1125 14:45:00.160799 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401365-tfbfx"]
Nov 25 14:45:00 crc kubenswrapper[4796]: E1125 14:45:00.161966 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="099992bc-6139-4064-b84d-7f9c319026d9" containerName="init"
Nov 25 14:45:00 crc kubenswrapper[4796]: I1125 14:45:00.161989 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="099992bc-6139-4064-b84d-7f9c319026d9" containerName="init"
Nov 25 14:45:00 crc kubenswrapper[4796]: E1125 14:45:00.162014 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="099992bc-6139-4064-b84d-7f9c319026d9" containerName="dnsmasq-dns"
Nov 25 14:45:00 crc kubenswrapper[4796]: I1125 14:45:00.162024 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="099992bc-6139-4064-b84d-7f9c319026d9" containerName="dnsmasq-dns"
Nov 25 14:45:00 crc kubenswrapper[4796]: I1125 14:45:00.162234 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="099992bc-6139-4064-b84d-7f9c319026d9" containerName="dnsmasq-dns"
Nov 25 14:45:00 crc kubenswrapper[4796]: I1125 14:45:00.162907 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-tfbfx"
Nov 25 14:45:00 crc kubenswrapper[4796]: I1125 14:45:00.166605 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 25 14:45:00 crc kubenswrapper[4796]: I1125 14:45:00.168261 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 25 14:45:00 crc kubenswrapper[4796]: I1125 14:45:00.174818 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401365-tfbfx"]
Nov 25 14:45:00 crc kubenswrapper[4796]: I1125 14:45:00.283116 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34fd65b2-a144-49bd-8e5d-5cc42a812348-secret-volume\") pod \"collect-profiles-29401365-tfbfx\" (UID: \"34fd65b2-a144-49bd-8e5d-5cc42a812348\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-tfbfx"
Nov 25 14:45:00 crc kubenswrapper[4796]: I1125 14:45:00.283204 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34fd65b2-a144-49bd-8e5d-5cc42a812348-config-volume\") pod \"collect-profiles-29401365-tfbfx\" (UID: \"34fd65b2-a144-49bd-8e5d-5cc42a812348\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-tfbfx"
Nov 25 14:45:00 crc kubenswrapper[4796]: I1125 14:45:00.283457 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g4d2\" (UniqueName: \"kubernetes.io/projected/34fd65b2-a144-49bd-8e5d-5cc42a812348-kube-api-access-2g4d2\") pod \"collect-profiles-29401365-tfbfx\" (UID: \"34fd65b2-a144-49bd-8e5d-5cc42a812348\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-tfbfx"
Nov 25 14:45:00 crc kubenswrapper[4796]: I1125 14:45:00.385005 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34fd65b2-a144-49bd-8e5d-5cc42a812348-secret-volume\") pod \"collect-profiles-29401365-tfbfx\" (UID: \"34fd65b2-a144-49bd-8e5d-5cc42a812348\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-tfbfx"
Nov 25 14:45:00 crc kubenswrapper[4796]: I1125 14:45:00.385099 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34fd65b2-a144-49bd-8e5d-5cc42a812348-config-volume\") pod \"collect-profiles-29401365-tfbfx\" (UID: \"34fd65b2-a144-49bd-8e5d-5cc42a812348\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-tfbfx"
Nov 25 14:45:00 crc kubenswrapper[4796]: I1125 14:45:00.385357 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g4d2\" (UniqueName: \"kubernetes.io/projected/34fd65b2-a144-49bd-8e5d-5cc42a812348-kube-api-access-2g4d2\") pod \"collect-profiles-29401365-tfbfx\" (UID: \"34fd65b2-a144-49bd-8e5d-5cc42a812348\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-tfbfx"
Nov 25 14:45:00 crc kubenswrapper[4796]: I1125 14:45:00.387631 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34fd65b2-a144-49bd-8e5d-5cc42a812348-config-volume\") pod \"collect-profiles-29401365-tfbfx\" (UID: \"34fd65b2-a144-49bd-8e5d-5cc42a812348\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-tfbfx"
Nov 25 14:45:00 crc kubenswrapper[4796]: I1125 14:45:00.395503 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34fd65b2-a144-49bd-8e5d-5cc42a812348-secret-volume\") pod \"collect-profiles-29401365-tfbfx\" (UID: \"34fd65b2-a144-49bd-8e5d-5cc42a812348\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-tfbfx"
Nov 25 14:45:00 crc kubenswrapper[4796]: I1125 14:45:00.407176 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g4d2\" (UniqueName: \"kubernetes.io/projected/34fd65b2-a144-49bd-8e5d-5cc42a812348-kube-api-access-2g4d2\") pod \"collect-profiles-29401365-tfbfx\" (UID: \"34fd65b2-a144-49bd-8e5d-5cc42a812348\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-tfbfx"
Nov 25 14:45:00 crc kubenswrapper[4796]: I1125 14:45:00.492879 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-tfbfx"
Nov 25 14:45:00 crc kubenswrapper[4796]: I1125 14:45:00.972715 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401365-tfbfx"]
Nov 25 14:45:01 crc kubenswrapper[4796]: I1125 14:45:01.429761 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-tfbfx" event={"ID":"34fd65b2-a144-49bd-8e5d-5cc42a812348","Type":"ContainerStarted","Data":"2a3dffae0355018ca6dcf6aec28247a98d0b290a0d871a11cf65bf96520f38bc"}
Nov 25 14:45:02 crc kubenswrapper[4796]: I1125 14:45:02.439015 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-tfbfx" event={"ID":"34fd65b2-a144-49bd-8e5d-5cc42a812348","Type":"ContainerStarted","Data":"13783ee45b22b874c27eabc4868a95fdde849ab4769e5e8d964da083f42995b0"}
Nov 25 14:45:04 crc kubenswrapper[4796]: I1125 14:45:04.478329 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-tfbfx" podStartSLOduration=4.478305549 podStartE2EDuration="4.478305549s" podCreationTimestamp="2025-11-25 14:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:45:04.472165288 +0000 UTC m=+1232.815274752" watchObservedRunningTime="2025-11-25 14:45:04.478305549 +0000 UTC m=+1232.821415003"
Nov 25 14:45:05 crc kubenswrapper[4796]: I1125 14:45:05.465379 4796 generic.go:334] "Generic (PLEG): container finished" podID="34fd65b2-a144-49bd-8e5d-5cc42a812348" containerID="13783ee45b22b874c27eabc4868a95fdde849ab4769e5e8d964da083f42995b0" exitCode=0
Nov 25 14:45:05 crc kubenswrapper[4796]: I1125 14:45:05.465430 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-tfbfx" event={"ID":"34fd65b2-a144-49bd-8e5d-5cc42a812348","Type":"ContainerDied","Data":"13783ee45b22b874c27eabc4868a95fdde849ab4769e5e8d964da083f42995b0"}
Nov 25 14:45:06 crc kubenswrapper[4796]: I1125 14:45:06.777597 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-tfbfx"
Nov 25 14:45:06 crc kubenswrapper[4796]: I1125 14:45:06.899651 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34fd65b2-a144-49bd-8e5d-5cc42a812348-secret-volume\") pod \"34fd65b2-a144-49bd-8e5d-5cc42a812348\" (UID: \"34fd65b2-a144-49bd-8e5d-5cc42a812348\") "
Nov 25 14:45:06 crc kubenswrapper[4796]: I1125 14:45:06.899756 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34fd65b2-a144-49bd-8e5d-5cc42a812348-config-volume\") pod \"34fd65b2-a144-49bd-8e5d-5cc42a812348\" (UID: \"34fd65b2-a144-49bd-8e5d-5cc42a812348\") "
Nov 25 14:45:06 crc kubenswrapper[4796]: I1125 14:45:06.899888 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g4d2\" (UniqueName: \"kubernetes.io/projected/34fd65b2-a144-49bd-8e5d-5cc42a812348-kube-api-access-2g4d2\") pod \"34fd65b2-a144-49bd-8e5d-5cc42a812348\" (UID: \"34fd65b2-a144-49bd-8e5d-5cc42a812348\") "
Nov 25 14:45:06 crc kubenswrapper[4796]: I1125 14:45:06.900537 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34fd65b2-a144-49bd-8e5d-5cc42a812348-config-volume" (OuterVolumeSpecName: "config-volume") pod "34fd65b2-a144-49bd-8e5d-5cc42a812348" (UID: "34fd65b2-a144-49bd-8e5d-5cc42a812348"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 14:45:06 crc kubenswrapper[4796]: I1125 14:45:06.900773 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34fd65b2-a144-49bd-8e5d-5cc42a812348-config-volume\") on node \"crc\" DevicePath \"\""
Nov 25 14:45:06 crc kubenswrapper[4796]: I1125 14:45:06.905825 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34fd65b2-a144-49bd-8e5d-5cc42a812348-kube-api-access-2g4d2" (OuterVolumeSpecName: "kube-api-access-2g4d2") pod "34fd65b2-a144-49bd-8e5d-5cc42a812348" (UID: "34fd65b2-a144-49bd-8e5d-5cc42a812348"). InnerVolumeSpecName "kube-api-access-2g4d2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 14:45:06 crc kubenswrapper[4796]: I1125 14:45:06.905978 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34fd65b2-a144-49bd-8e5d-5cc42a812348-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "34fd65b2-a144-49bd-8e5d-5cc42a812348" (UID: "34fd65b2-a144-49bd-8e5d-5cc42a812348"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 14:45:07 crc kubenswrapper[4796]: I1125 14:45:07.002557 4796 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34fd65b2-a144-49bd-8e5d-5cc42a812348-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 25 14:45:07 crc kubenswrapper[4796]: I1125 14:45:07.002611 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2g4d2\" (UniqueName: \"kubernetes.io/projected/34fd65b2-a144-49bd-8e5d-5cc42a812348-kube-api-access-2g4d2\") on node \"crc\" DevicePath \"\""
Nov 25 14:45:07 crc kubenswrapper[4796]: I1125 14:45:07.482334 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-tfbfx" event={"ID":"34fd65b2-a144-49bd-8e5d-5cc42a812348","Type":"ContainerDied","Data":"2a3dffae0355018ca6dcf6aec28247a98d0b290a0d871a11cf65bf96520f38bc"}
Nov 25 14:45:07 crc kubenswrapper[4796]: I1125 14:45:07.482371 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a3dffae0355018ca6dcf6aec28247a98d0b290a0d871a11cf65bf96520f38bc"
Nov 25 14:45:07 crc kubenswrapper[4796]: I1125 14:45:07.482417 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-tfbfx"
Nov 25 14:45:10 crc kubenswrapper[4796]: I1125 14:45:10.512100 4796 generic.go:334] "Generic (PLEG): container finished" podID="02b64285-eaa9-4677-aa4c-a16f0cffc2f8" containerID="9273adb8a7b2702f78b3ff186c214371c06a57b8d66d3d1ae12bc29558f29507" exitCode=0
Nov 25 14:45:10 crc kubenswrapper[4796]: I1125 14:45:10.512266 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-stpn7" event={"ID":"02b64285-eaa9-4677-aa4c-a16f0cffc2f8","Type":"ContainerDied","Data":"9273adb8a7b2702f78b3ff186c214371c06a57b8d66d3d1ae12bc29558f29507"}
Nov 25 14:45:11 crc kubenswrapper[4796]: I1125 14:45:11.900301 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-stpn7"
Nov 25 14:45:11 crc kubenswrapper[4796]: I1125 14:45:11.983959 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b64285-eaa9-4677-aa4c-a16f0cffc2f8-combined-ca-bundle\") pod \"02b64285-eaa9-4677-aa4c-a16f0cffc2f8\" (UID: \"02b64285-eaa9-4677-aa4c-a16f0cffc2f8\") "
Nov 25 14:45:11 crc kubenswrapper[4796]: I1125 14:45:11.984039 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b64285-eaa9-4677-aa4c-a16f0cffc2f8-config-data\") pod \"02b64285-eaa9-4677-aa4c-a16f0cffc2f8\" (UID: \"02b64285-eaa9-4677-aa4c-a16f0cffc2f8\") "
Nov 25 14:45:11 crc kubenswrapper[4796]: I1125 14:45:11.984255 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxlwh\" (UniqueName: \"kubernetes.io/projected/02b64285-eaa9-4677-aa4c-a16f0cffc2f8-kube-api-access-sxlwh\") pod \"02b64285-eaa9-4677-aa4c-a16f0cffc2f8\" (UID: \"02b64285-eaa9-4677-aa4c-a16f0cffc2f8\") "
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.002399 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02b64285-eaa9-4677-aa4c-a16f0cffc2f8-kube-api-access-sxlwh" (OuterVolumeSpecName: "kube-api-access-sxlwh") pod "02b64285-eaa9-4677-aa4c-a16f0cffc2f8" (UID: "02b64285-eaa9-4677-aa4c-a16f0cffc2f8"). InnerVolumeSpecName "kube-api-access-sxlwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.027763 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02b64285-eaa9-4677-aa4c-a16f0cffc2f8-config-data" (OuterVolumeSpecName: "config-data") pod "02b64285-eaa9-4677-aa4c-a16f0cffc2f8" (UID: "02b64285-eaa9-4677-aa4c-a16f0cffc2f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.029341 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02b64285-eaa9-4677-aa4c-a16f0cffc2f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02b64285-eaa9-4677-aa4c-a16f0cffc2f8" (UID: "02b64285-eaa9-4677-aa4c-a16f0cffc2f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.087398 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxlwh\" (UniqueName: \"kubernetes.io/projected/02b64285-eaa9-4677-aa4c-a16f0cffc2f8-kube-api-access-sxlwh\") on node \"crc\" DevicePath \"\""
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.087457 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b64285-eaa9-4677-aa4c-a16f0cffc2f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.087469 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b64285-eaa9-4677-aa4c-a16f0cffc2f8-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.540180 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-stpn7" event={"ID":"02b64285-eaa9-4677-aa4c-a16f0cffc2f8","Type":"ContainerDied","Data":"b15790d647a4e92915b1ac8afdb7f8c5b1f9087897e899a11afb90952aa7d99f"}
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.540603 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b15790d647a4e92915b1ac8afdb7f8c5b1f9087897e899a11afb90952aa7d99f"
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.540216 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-stpn7"
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.825151 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-tvnmv"]
Nov 25 14:45:12 crc kubenswrapper[4796]: E1125 14:45:12.828752 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34fd65b2-a144-49bd-8e5d-5cc42a812348" containerName="collect-profiles"
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.828796 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="34fd65b2-a144-49bd-8e5d-5cc42a812348" containerName="collect-profiles"
Nov 25 14:45:12 crc kubenswrapper[4796]: E1125 14:45:12.828839 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b64285-eaa9-4677-aa4c-a16f0cffc2f8" containerName="keystone-db-sync"
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.828850 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b64285-eaa9-4677-aa4c-a16f0cffc2f8" containerName="keystone-db-sync"
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.829182 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="02b64285-eaa9-4677-aa4c-a16f0cffc2f8" containerName="keystone-db-sync"
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.829219 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="34fd65b2-a144-49bd-8e5d-5cc42a812348" containerName="collect-profiles"
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.830336 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-tvnmv"
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.834159 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-tvnmv"]
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.866208 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wlrzr"]
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.871997 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wlrzr"
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.878526 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.879191 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.879324 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.879517 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-8z6zn"
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.880532 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.908074 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wlrzr"]
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.910917 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-tvnmv\" (UID: \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\") " pod="openstack/dnsmasq-dns-5959f8865f-tvnmv"
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.910956 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn525\" (UniqueName: \"kubernetes.io/projected/9f95a77c-7ff5-46c6-8321-699193becf55-kube-api-access-rn525\") pod \"keystone-bootstrap-wlrzr\" (UID: \"9f95a77c-7ff5-46c6-8321-699193becf55\") " pod="openstack/keystone-bootstrap-wlrzr"
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.910977 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-fernet-keys\") pod \"keystone-bootstrap-wlrzr\" (UID: \"9f95a77c-7ff5-46c6-8321-699193becf55\") " pod="openstack/keystone-bootstrap-wlrzr"
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.911010 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6q84\" (UniqueName: \"kubernetes.io/projected/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-kube-api-access-b6q84\") pod \"dnsmasq-dns-5959f8865f-tvnmv\" (UID: \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\") " pod="openstack/dnsmasq-dns-5959f8865f-tvnmv"
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.911074 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-config\") pod \"dnsmasq-dns-5959f8865f-tvnmv\" (UID: \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\") " pod="openstack/dnsmasq-dns-5959f8865f-tvnmv"
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.911091 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-tvnmv\" (UID: \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\") " pod="openstack/dnsmasq-dns-5959f8865f-tvnmv"
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.911130 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-config-data\") pod \"keystone-bootstrap-wlrzr\" (UID: \"9f95a77c-7ff5-46c6-8321-699193becf55\") " pod="openstack/keystone-bootstrap-wlrzr"
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.911179 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-scripts\") pod \"keystone-bootstrap-wlrzr\" (UID: \"9f95a77c-7ff5-46c6-8321-699193becf55\") " pod="openstack/keystone-bootstrap-wlrzr"
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.911203 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-tvnmv\" (UID: \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\") " pod="openstack/dnsmasq-dns-5959f8865f-tvnmv"
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.911223 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-combined-ca-bundle\") pod \"keystone-bootstrap-wlrzr\" (UID: \"9f95a77c-7ff5-46c6-8321-699193becf55\") " pod="openstack/keystone-bootstrap-wlrzr"
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.911241 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-credential-keys\") pod \"keystone-bootstrap-wlrzr\" (UID: \"9f95a77c-7ff5-46c6-8321-699193becf55\") " pod="openstack/keystone-bootstrap-wlrzr"
Nov 25 14:45:12 crc kubenswrapper[4796]: I1125 14:45:12.911259 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-dns-svc\") pod \"dnsmasq-dns-5959f8865f-tvnmv\" (UID: \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\") " pod="openstack/dnsmasq-dns-5959f8865f-tvnmv"
Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.012000 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6b5df6b75c-4v2jt"]
Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.012908 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-config\") pod \"dnsmasq-dns-5959f8865f-tvnmv\" (UID: \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\") " pod="openstack/dnsmasq-dns-5959f8865f-tvnmv"
Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.014244 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-tvnmv\" (UID: \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\") " pod="openstack/dnsmasq-dns-5959f8865f-tvnmv"
Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.041757 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-config-data\") pod \"keystone-bootstrap-wlrzr\" (UID: \"9f95a77c-7ff5-46c6-8321-699193becf55\") " pod="openstack/keystone-bootstrap-wlrzr"
Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.041881 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-scripts\") pod \"keystone-bootstrap-wlrzr\" (UID: \"9f95a77c-7ff5-46c6-8321-699193becf55\") " pod="openstack/keystone-bootstrap-wlrzr"
Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.013308 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b5df6b75c-4v2jt"
Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.041043 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-tt8qv"]
Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.043130 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-tvnmv\" (UID: \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\") " pod="openstack/dnsmasq-dns-5959f8865f-tvnmv"
Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.043175 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-combined-ca-bundle\") pod \"keystone-bootstrap-wlrzr\" (UID: \"9f95a77c-7ff5-46c6-8321-699193becf55\") " pod="openstack/keystone-bootstrap-wlrzr"
Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.043206 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-credential-keys\") pod \"keystone-bootstrap-wlrzr\" (UID: \"9f95a77c-7ff5-46c6-8321-699193becf55\") " pod="openstack/keystone-bootstrap-wlrzr"
Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.043229 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-dns-svc\") pod \"dnsmasq-dns-5959f8865f-tvnmv\" (UID: \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\") " pod="openstack/dnsmasq-dns-5959f8865f-tvnmv"
Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.043265 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-tvnmv\" (UID: \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\") " pod="openstack/dnsmasq-dns-5959f8865f-tvnmv"
Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.043280 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn525\" (UniqueName: \"kubernetes.io/projected/9f95a77c-7ff5-46c6-8321-699193becf55-kube-api-access-rn525\") pod \"keystone-bootstrap-wlrzr\" (UID: \"9f95a77c-7ff5-46c6-8321-699193becf55\") " pod="openstack/keystone-bootstrap-wlrzr"
Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.043310 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-fernet-keys\") pod \"keystone-bootstrap-wlrzr\" (UID: \"9f95a77c-7ff5-46c6-8321-699193becf55\") " pod="openstack/keystone-bootstrap-wlrzr"
Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.043356 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6q84\" (UniqueName: \"kubernetes.io/projected/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-kube-api-access-b6q84\") pod \"dnsmasq-dns-5959f8865f-tvnmv\" (UID: \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\") " pod="openstack/dnsmasq-dns-5959f8865f-tvnmv"
Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.044297 4796 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-db-sync-tt8qv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.014174 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-config\") pod \"dnsmasq-dns-5959f8865f-tvnmv\" (UID: \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\") " pod="openstack/dnsmasq-dns-5959f8865f-tvnmv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.028902 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-tvnmv\" (UID: \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\") " pod="openstack/dnsmasq-dns-5959f8865f-tvnmv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.047710 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-tvnmv\" (UID: \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\") " pod="openstack/dnsmasq-dns-5959f8865f-tvnmv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.047762 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.048014 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-tvnmv\" (UID: \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\") " pod="openstack/dnsmasq-dns-5959f8865f-tvnmv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.048222 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-dns-svc\") pod 
\"dnsmasq-dns-5959f8865f-tvnmv\" (UID: \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\") " pod="openstack/dnsmasq-dns-5959f8865f-tvnmv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.057873 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-scripts\") pod \"keystone-bootstrap-wlrzr\" (UID: \"9f95a77c-7ff5-46c6-8321-699193becf55\") " pod="openstack/keystone-bootstrap-wlrzr" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.058237 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-config-data\") pod \"keystone-bootstrap-wlrzr\" (UID: \"9f95a77c-7ff5-46c6-8321-699193becf55\") " pod="openstack/keystone-bootstrap-wlrzr" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.059117 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-fernet-keys\") pod \"keystone-bootstrap-wlrzr\" (UID: \"9f95a77c-7ff5-46c6-8321-699193becf55\") " pod="openstack/keystone-bootstrap-wlrzr" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.061662 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b5df6b75c-4v2jt"] Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.064246 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-bdh7z" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.064566 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.064820 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.065068 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cinder-cinder-dockercfg-jzp4t" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.065334 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.065691 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.070164 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-combined-ca-bundle\") pod \"keystone-bootstrap-wlrzr\" (UID: \"9f95a77c-7ff5-46c6-8321-699193becf55\") " pod="openstack/keystone-bootstrap-wlrzr" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.072392 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-credential-keys\") pod \"keystone-bootstrap-wlrzr\" (UID: \"9f95a77c-7ff5-46c6-8321-699193becf55\") " pod="openstack/keystone-bootstrap-wlrzr" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.079612 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-tt8qv"] Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.088544 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn525\" (UniqueName: \"kubernetes.io/projected/9f95a77c-7ff5-46c6-8321-699193becf55-kube-api-access-rn525\") pod \"keystone-bootstrap-wlrzr\" (UID: \"9f95a77c-7ff5-46c6-8321-699193becf55\") " pod="openstack/keystone-bootstrap-wlrzr" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.108411 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6q84\" (UniqueName: \"kubernetes.io/projected/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-kube-api-access-b6q84\") pod \"dnsmasq-dns-5959f8865f-tvnmv\" (UID: 
\"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\") " pod="openstack/dnsmasq-dns-5959f8865f-tvnmv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.135426 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-8fxjq"] Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.136513 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8fxjq" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.143997 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.144092 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-p2rhf" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.144240 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.144943 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b0493d28-3276-4a85-a800-4d0b1576c407-db-sync-config-data\") pod \"cinder-db-sync-tt8qv\" (UID: \"b0493d28-3276-4a85-a800-4d0b1576c407\") " pod="openstack/cinder-db-sync-tt8qv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.144977 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-horizon-secret-key\") pod \"horizon-6b5df6b75c-4v2jt\" (UID: \"95bcc316-2c6f-41e2-bfea-e7fae75cacfa\") " pod="openstack/horizon-6b5df6b75c-4v2jt" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.145012 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b0493d28-3276-4a85-a800-4d0b1576c407-combined-ca-bundle\") pod \"cinder-db-sync-tt8qv\" (UID: \"b0493d28-3276-4a85-a800-4d0b1576c407\") " pod="openstack/cinder-db-sync-tt8qv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.145053 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0493d28-3276-4a85-a800-4d0b1576c407-scripts\") pod \"cinder-db-sync-tt8qv\" (UID: \"b0493d28-3276-4a85-a800-4d0b1576c407\") " pod="openstack/cinder-db-sync-tt8qv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.145101 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-logs\") pod \"horizon-6b5df6b75c-4v2jt\" (UID: \"95bcc316-2c6f-41e2-bfea-e7fae75cacfa\") " pod="openstack/horizon-6b5df6b75c-4v2jt" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.145129 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6fmx\" (UniqueName: \"kubernetes.io/projected/b0493d28-3276-4a85-a800-4d0b1576c407-kube-api-access-c6fmx\") pod \"cinder-db-sync-tt8qv\" (UID: \"b0493d28-3276-4a85-a800-4d0b1576c407\") " pod="openstack/cinder-db-sync-tt8qv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.145170 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b0493d28-3276-4a85-a800-4d0b1576c407-etc-machine-id\") pod \"cinder-db-sync-tt8qv\" (UID: \"b0493d28-3276-4a85-a800-4d0b1576c407\") " pod="openstack/cinder-db-sync-tt8qv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.145186 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c84dq\" (UniqueName: 
\"kubernetes.io/projected/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-kube-api-access-c84dq\") pod \"horizon-6b5df6b75c-4v2jt\" (UID: \"95bcc316-2c6f-41e2-bfea-e7fae75cacfa\") " pod="openstack/horizon-6b5df6b75c-4v2jt" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.145215 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-config-data\") pod \"horizon-6b5df6b75c-4v2jt\" (UID: \"95bcc316-2c6f-41e2-bfea-e7fae75cacfa\") " pod="openstack/horizon-6b5df6b75c-4v2jt" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.145262 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-scripts\") pod \"horizon-6b5df6b75c-4v2jt\" (UID: \"95bcc316-2c6f-41e2-bfea-e7fae75cacfa\") " pod="openstack/horizon-6b5df6b75c-4v2jt" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.145280 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0493d28-3276-4a85-a800-4d0b1576c407-config-data\") pod \"cinder-db-sync-tt8qv\" (UID: \"b0493d28-3276-4a85-a800-4d0b1576c407\") " pod="openstack/cinder-db-sync-tt8qv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.161986 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-tvnmv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.244352 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wlrzr" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.249610 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/182a7451-724e-4649-a911-f26535ec04f9-combined-ca-bundle\") pod \"neutron-db-sync-8fxjq\" (UID: \"182a7451-724e-4649-a911-f26535ec04f9\") " pod="openstack/neutron-db-sync-8fxjq" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.249660 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-logs\") pod \"horizon-6b5df6b75c-4v2jt\" (UID: \"95bcc316-2c6f-41e2-bfea-e7fae75cacfa\") " pod="openstack/horizon-6b5df6b75c-4v2jt" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.249697 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6fmx\" (UniqueName: \"kubernetes.io/projected/b0493d28-3276-4a85-a800-4d0b1576c407-kube-api-access-c6fmx\") pod \"cinder-db-sync-tt8qv\" (UID: \"b0493d28-3276-4a85-a800-4d0b1576c407\") " pod="openstack/cinder-db-sync-tt8qv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.249746 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b0493d28-3276-4a85-a800-4d0b1576c407-etc-machine-id\") pod \"cinder-db-sync-tt8qv\" (UID: \"b0493d28-3276-4a85-a800-4d0b1576c407\") " pod="openstack/cinder-db-sync-tt8qv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.249766 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c84dq\" (UniqueName: \"kubernetes.io/projected/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-kube-api-access-c84dq\") pod \"horizon-6b5df6b75c-4v2jt\" (UID: \"95bcc316-2c6f-41e2-bfea-e7fae75cacfa\") " pod="openstack/horizon-6b5df6b75c-4v2jt" Nov 25 14:45:13 
crc kubenswrapper[4796]: I1125 14:45:13.249793 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-config-data\") pod \"horizon-6b5df6b75c-4v2jt\" (UID: \"95bcc316-2c6f-41e2-bfea-e7fae75cacfa\") " pod="openstack/horizon-6b5df6b75c-4v2jt" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.249813 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-scripts\") pod \"horizon-6b5df6b75c-4v2jt\" (UID: \"95bcc316-2c6f-41e2-bfea-e7fae75cacfa\") " pod="openstack/horizon-6b5df6b75c-4v2jt" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.249838 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0493d28-3276-4a85-a800-4d0b1576c407-config-data\") pod \"cinder-db-sync-tt8qv\" (UID: \"b0493d28-3276-4a85-a800-4d0b1576c407\") " pod="openstack/cinder-db-sync-tt8qv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.249866 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b0493d28-3276-4a85-a800-4d0b1576c407-db-sync-config-data\") pod \"cinder-db-sync-tt8qv\" (UID: \"b0493d28-3276-4a85-a800-4d0b1576c407\") " pod="openstack/cinder-db-sync-tt8qv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.249889 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-horizon-secret-key\") pod \"horizon-6b5df6b75c-4v2jt\" (UID: \"95bcc316-2c6f-41e2-bfea-e7fae75cacfa\") " pod="openstack/horizon-6b5df6b75c-4v2jt" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.249909 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-x5qzl\" (UniqueName: \"kubernetes.io/projected/182a7451-724e-4649-a911-f26535ec04f9-kube-api-access-x5qzl\") pod \"neutron-db-sync-8fxjq\" (UID: \"182a7451-724e-4649-a911-f26535ec04f9\") " pod="openstack/neutron-db-sync-8fxjq" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.249938 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0493d28-3276-4a85-a800-4d0b1576c407-combined-ca-bundle\") pod \"cinder-db-sync-tt8qv\" (UID: \"b0493d28-3276-4a85-a800-4d0b1576c407\") " pod="openstack/cinder-db-sync-tt8qv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.249960 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/182a7451-724e-4649-a911-f26535ec04f9-config\") pod \"neutron-db-sync-8fxjq\" (UID: \"182a7451-724e-4649-a911-f26535ec04f9\") " pod="openstack/neutron-db-sync-8fxjq" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.249989 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0493d28-3276-4a85-a800-4d0b1576c407-scripts\") pod \"cinder-db-sync-tt8qv\" (UID: \"b0493d28-3276-4a85-a800-4d0b1576c407\") " pod="openstack/cinder-db-sync-tt8qv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.251197 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-scripts\") pod \"horizon-6b5df6b75c-4v2jt\" (UID: \"95bcc316-2c6f-41e2-bfea-e7fae75cacfa\") " pod="openstack/horizon-6b5df6b75c-4v2jt" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.252660 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-logs\") pod \"horizon-6b5df6b75c-4v2jt\" (UID: 
\"95bcc316-2c6f-41e2-bfea-e7fae75cacfa\") " pod="openstack/horizon-6b5df6b75c-4v2jt" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.252725 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b0493d28-3276-4a85-a800-4d0b1576c407-etc-machine-id\") pod \"cinder-db-sync-tt8qv\" (UID: \"b0493d28-3276-4a85-a800-4d0b1576c407\") " pod="openstack/cinder-db-sync-tt8qv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.254511 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-config-data\") pod \"horizon-6b5df6b75c-4v2jt\" (UID: \"95bcc316-2c6f-41e2-bfea-e7fae75cacfa\") " pod="openstack/horizon-6b5df6b75c-4v2jt" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.262096 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0493d28-3276-4a85-a800-4d0b1576c407-combined-ca-bundle\") pod \"cinder-db-sync-tt8qv\" (UID: \"b0493d28-3276-4a85-a800-4d0b1576c407\") " pod="openstack/cinder-db-sync-tt8qv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.267634 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0493d28-3276-4a85-a800-4d0b1576c407-scripts\") pod \"cinder-db-sync-tt8qv\" (UID: \"b0493d28-3276-4a85-a800-4d0b1576c407\") " pod="openstack/cinder-db-sync-tt8qv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.294696 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-8fxjq"] Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.302847 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b0493d28-3276-4a85-a800-4d0b1576c407-db-sync-config-data\") pod \"cinder-db-sync-tt8qv\" (UID: 
\"b0493d28-3276-4a85-a800-4d0b1576c407\") " pod="openstack/cinder-db-sync-tt8qv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.303314 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-horizon-secret-key\") pod \"horizon-6b5df6b75c-4v2jt\" (UID: \"95bcc316-2c6f-41e2-bfea-e7fae75cacfa\") " pod="openstack/horizon-6b5df6b75c-4v2jt" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.304821 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0493d28-3276-4a85-a800-4d0b1576c407-config-data\") pod \"cinder-db-sync-tt8qv\" (UID: \"b0493d28-3276-4a85-a800-4d0b1576c407\") " pod="openstack/cinder-db-sync-tt8qv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.321457 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6fmx\" (UniqueName: \"kubernetes.io/projected/b0493d28-3276-4a85-a800-4d0b1576c407-kube-api-access-c6fmx\") pod \"cinder-db-sync-tt8qv\" (UID: \"b0493d28-3276-4a85-a800-4d0b1576c407\") " pod="openstack/cinder-db-sync-tt8qv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.321554 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c84dq\" (UniqueName: \"kubernetes.io/projected/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-kube-api-access-c84dq\") pod \"horizon-6b5df6b75c-4v2jt\" (UID: \"95bcc316-2c6f-41e2-bfea-e7fae75cacfa\") " pod="openstack/horizon-6b5df6b75c-4v2jt" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.355075 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5qzl\" (UniqueName: \"kubernetes.io/projected/182a7451-724e-4649-a911-f26535ec04f9-kube-api-access-x5qzl\") pod \"neutron-db-sync-8fxjq\" (UID: \"182a7451-724e-4649-a911-f26535ec04f9\") " pod="openstack/neutron-db-sync-8fxjq" Nov 25 14:45:13 crc 
kubenswrapper[4796]: I1125 14:45:13.355166 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/182a7451-724e-4649-a911-f26535ec04f9-config\") pod \"neutron-db-sync-8fxjq\" (UID: \"182a7451-724e-4649-a911-f26535ec04f9\") " pod="openstack/neutron-db-sync-8fxjq" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.355262 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/182a7451-724e-4649-a911-f26535ec04f9-combined-ca-bundle\") pod \"neutron-db-sync-8fxjq\" (UID: \"182a7451-724e-4649-a911-f26535ec04f9\") " pod="openstack/neutron-db-sync-8fxjq" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.361624 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/182a7451-724e-4649-a911-f26535ec04f9-config\") pod \"neutron-db-sync-8fxjq\" (UID: \"182a7451-724e-4649-a911-f26535ec04f9\") " pod="openstack/neutron-db-sync-8fxjq" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.363564 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/182a7451-724e-4649-a911-f26535ec04f9-combined-ca-bundle\") pod \"neutron-db-sync-8fxjq\" (UID: \"182a7451-724e-4649-a911-f26535ec04f9\") " pod="openstack/neutron-db-sync-8fxjq" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.368824 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7c7cb86f49-tsk8g"] Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.371830 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7c7cb86f49-tsk8g" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.381282 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5qzl\" (UniqueName: \"kubernetes.io/projected/182a7451-724e-4649-a911-f26535ec04f9-kube-api-access-x5qzl\") pod \"neutron-db-sync-8fxjq\" (UID: \"182a7451-724e-4649-a911-f26535ec04f9\") " pod="openstack/neutron-db-sync-8fxjq" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.383363 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-tt8qv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.390512 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8fxjq" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.429725 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-vmgbv"] Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.431024 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-vmgbv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.433654 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.434738 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rbpts" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.434888 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.453646 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-tvnmv"] Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.456374 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af3d6422-4023-40e2-92c3-ff9327a3ce5d-config-data\") pod \"horizon-7c7cb86f49-tsk8g\" (UID: \"af3d6422-4023-40e2-92c3-ff9327a3ce5d\") " pod="openstack/horizon-7c7cb86f49-tsk8g" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.456470 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af3d6422-4023-40e2-92c3-ff9327a3ce5d-scripts\") pod \"horizon-7c7cb86f49-tsk8g\" (UID: \"af3d6422-4023-40e2-92c3-ff9327a3ce5d\") " pod="openstack/horizon-7c7cb86f49-tsk8g" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.456542 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/af3d6422-4023-40e2-92c3-ff9327a3ce5d-horizon-secret-key\") pod \"horizon-7c7cb86f49-tsk8g\" (UID: \"af3d6422-4023-40e2-92c3-ff9327a3ce5d\") " pod="openstack/horizon-7c7cb86f49-tsk8g" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.456597 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7x4x\" (UniqueName: \"kubernetes.io/projected/af3d6422-4023-40e2-92c3-ff9327a3ce5d-kube-api-access-j7x4x\") pod \"horizon-7c7cb86f49-tsk8g\" (UID: \"af3d6422-4023-40e2-92c3-ff9327a3ce5d\") " pod="openstack/horizon-7c7cb86f49-tsk8g" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.456617 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e99260e-8b90-4cd0-8417-8dc3c142a743-config-data\") pod \"placement-db-sync-vmgbv\" (UID: \"1e99260e-8b90-4cd0-8417-8dc3c142a743\") " pod="openstack/placement-db-sync-vmgbv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.456646 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af3d6422-4023-40e2-92c3-ff9327a3ce5d-logs\") pod \"horizon-7c7cb86f49-tsk8g\" (UID: \"af3d6422-4023-40e2-92c3-ff9327a3ce5d\") " pod="openstack/horizon-7c7cb86f49-tsk8g" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.456666 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwkq8\" (UniqueName: \"kubernetes.io/projected/1e99260e-8b90-4cd0-8417-8dc3c142a743-kube-api-access-fwkq8\") pod \"placement-db-sync-vmgbv\" (UID: \"1e99260e-8b90-4cd0-8417-8dc3c142a743\") " pod="openstack/placement-db-sync-vmgbv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.456691 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e99260e-8b90-4cd0-8417-8dc3c142a743-logs\") pod \"placement-db-sync-vmgbv\" (UID: \"1e99260e-8b90-4cd0-8417-8dc3c142a743\") " pod="openstack/placement-db-sync-vmgbv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.456742 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e99260e-8b90-4cd0-8417-8dc3c142a743-scripts\") pod \"placement-db-sync-vmgbv\" (UID: \"1e99260e-8b90-4cd0-8417-8dc3c142a743\") " pod="openstack/placement-db-sync-vmgbv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.456775 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e99260e-8b90-4cd0-8417-8dc3c142a743-combined-ca-bundle\") pod \"placement-db-sync-vmgbv\" (UID: \"1e99260e-8b90-4cd0-8417-8dc3c142a743\") " pod="openstack/placement-db-sync-vmgbv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.460446 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-ttn2n"] Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.461446 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-ttn2n" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.466028 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.466096 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-6c45s" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.480288 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-vmgbv"] Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.486831 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c7cb86f49-tsk8g"] Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.493020 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-ttn2n"] Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.499429 4796 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-58dd9ff6bc-45fs2"] Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.500939 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.506498 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.509458 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.514166 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.516141 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.524639 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b5df6b75c-4v2jt" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.527641 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-45fs2"] Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.538480 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.558848 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-log-httpd\") pod \"ceilometer-0\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " pod="openstack/ceilometer-0" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.558889 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/af3d6422-4023-40e2-92c3-ff9327a3ce5d-horizon-secret-key\") pod 
\"horizon-7c7cb86f49-tsk8g\" (UID: \"af3d6422-4023-40e2-92c3-ff9327a3ce5d\") " pod="openstack/horizon-7c7cb86f49-tsk8g" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.558912 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkjl4\" (UniqueName: \"kubernetes.io/projected/fa2e8489-181d-4c50-b9c5-484432e7e070-kube-api-access-nkjl4\") pod \"dnsmasq-dns-58dd9ff6bc-45fs2\" (UID: \"fa2e8489-181d-4c50-b9c5-484432e7e070\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.559093 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-45fs2\" (UID: \"fa2e8489-181d-4c50-b9c5-484432e7e070\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.559125 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7x4x\" (UniqueName: \"kubernetes.io/projected/af3d6422-4023-40e2-92c3-ff9327a3ce5d-kube-api-access-j7x4x\") pod \"horizon-7c7cb86f49-tsk8g\" (UID: \"af3d6422-4023-40e2-92c3-ff9327a3ce5d\") " pod="openstack/horizon-7c7cb86f49-tsk8g" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.559148 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e99260e-8b90-4cd0-8417-8dc3c142a743-config-data\") pod \"placement-db-sync-vmgbv\" (UID: \"1e99260e-8b90-4cd0-8417-8dc3c142a743\") " pod="openstack/placement-db-sync-vmgbv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.559170 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " pod="openstack/ceilometer-0" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.559192 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-45fs2\" (UID: \"fa2e8489-181d-4c50-b9c5-484432e7e070\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.559222 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p9kt\" (UniqueName: \"kubernetes.io/projected/c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2-kube-api-access-7p9kt\") pod \"barbican-db-sync-ttn2n\" (UID: \"c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2\") " pod="openstack/barbican-db-sync-ttn2n" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.559244 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af3d6422-4023-40e2-92c3-ff9327a3ce5d-logs\") pod \"horizon-7c7cb86f49-tsk8g\" (UID: \"af3d6422-4023-40e2-92c3-ff9327a3ce5d\") " pod="openstack/horizon-7c7cb86f49-tsk8g" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.559266 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwkq8\" (UniqueName: \"kubernetes.io/projected/1e99260e-8b90-4cd0-8417-8dc3c142a743-kube-api-access-fwkq8\") pod \"placement-db-sync-vmgbv\" (UID: \"1e99260e-8b90-4cd0-8417-8dc3c142a743\") " pod="openstack/placement-db-sync-vmgbv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.559285 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-config\") pod \"dnsmasq-dns-58dd9ff6bc-45fs2\" (UID: \"fa2e8489-181d-4c50-b9c5-484432e7e070\") 
" pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.559311 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e99260e-8b90-4cd0-8417-8dc3c142a743-logs\") pod \"placement-db-sync-vmgbv\" (UID: \"1e99260e-8b90-4cd0-8417-8dc3c142a743\") " pod="openstack/placement-db-sync-vmgbv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.559341 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e99260e-8b90-4cd0-8417-8dc3c142a743-scripts\") pod \"placement-db-sync-vmgbv\" (UID: \"1e99260e-8b90-4cd0-8417-8dc3c142a743\") " pod="openstack/placement-db-sync-vmgbv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.564552 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-scripts\") pod \"ceilometer-0\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " pod="openstack/ceilometer-0" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.564628 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5l42\" (UniqueName: \"kubernetes.io/projected/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-kube-api-access-v5l42\") pod \"ceilometer-0\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " pod="openstack/ceilometer-0" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.564950 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-run-httpd\") pod \"ceilometer-0\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " pod="openstack/ceilometer-0" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.564999 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e99260e-8b90-4cd0-8417-8dc3c142a743-combined-ca-bundle\") pod \"placement-db-sync-vmgbv\" (UID: \"1e99260e-8b90-4cd0-8417-8dc3c142a743\") " pod="openstack/placement-db-sync-vmgbv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.565028 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af3d6422-4023-40e2-92c3-ff9327a3ce5d-config-data\") pod \"horizon-7c7cb86f49-tsk8g\" (UID: \"af3d6422-4023-40e2-92c3-ff9327a3ce5d\") " pod="openstack/horizon-7c7cb86f49-tsk8g" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.565056 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2-combined-ca-bundle\") pod \"barbican-db-sync-ttn2n\" (UID: \"c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2\") " pod="openstack/barbican-db-sync-ttn2n" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.565119 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " pod="openstack/ceilometer-0" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.565147 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-config-data\") pod \"ceilometer-0\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " pod="openstack/ceilometer-0" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.565195 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/af3d6422-4023-40e2-92c3-ff9327a3ce5d-scripts\") pod \"horizon-7c7cb86f49-tsk8g\" (UID: \"af3d6422-4023-40e2-92c3-ff9327a3ce5d\") " pod="openstack/horizon-7c7cb86f49-tsk8g" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.565217 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2-db-sync-config-data\") pod \"barbican-db-sync-ttn2n\" (UID: \"c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2\") " pod="openstack/barbican-db-sync-ttn2n" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.565234 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-45fs2\" (UID: \"fa2e8489-181d-4c50-b9c5-484432e7e070\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.566322 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-45fs2\" (UID: \"fa2e8489-181d-4c50-b9c5-484432e7e070\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.566848 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af3d6422-4023-40e2-92c3-ff9327a3ce5d-logs\") pod \"horizon-7c7cb86f49-tsk8g\" (UID: \"af3d6422-4023-40e2-92c3-ff9327a3ce5d\") " pod="openstack/horizon-7c7cb86f49-tsk8g" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.568780 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/1e99260e-8b90-4cd0-8417-8dc3c142a743-logs\") pod \"placement-db-sync-vmgbv\" (UID: \"1e99260e-8b90-4cd0-8417-8dc3c142a743\") " pod="openstack/placement-db-sync-vmgbv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.570482 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af3d6422-4023-40e2-92c3-ff9327a3ce5d-scripts\") pod \"horizon-7c7cb86f49-tsk8g\" (UID: \"af3d6422-4023-40e2-92c3-ff9327a3ce5d\") " pod="openstack/horizon-7c7cb86f49-tsk8g" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.577849 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/af3d6422-4023-40e2-92c3-ff9327a3ce5d-horizon-secret-key\") pod \"horizon-7c7cb86f49-tsk8g\" (UID: \"af3d6422-4023-40e2-92c3-ff9327a3ce5d\") " pod="openstack/horizon-7c7cb86f49-tsk8g" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.578882 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af3d6422-4023-40e2-92c3-ff9327a3ce5d-config-data\") pod \"horizon-7c7cb86f49-tsk8g\" (UID: \"af3d6422-4023-40e2-92c3-ff9327a3ce5d\") " pod="openstack/horizon-7c7cb86f49-tsk8g" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.579296 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e99260e-8b90-4cd0-8417-8dc3c142a743-config-data\") pod \"placement-db-sync-vmgbv\" (UID: \"1e99260e-8b90-4cd0-8417-8dc3c142a743\") " pod="openstack/placement-db-sync-vmgbv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.584090 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e99260e-8b90-4cd0-8417-8dc3c142a743-scripts\") pod \"placement-db-sync-vmgbv\" (UID: \"1e99260e-8b90-4cd0-8417-8dc3c142a743\") " 
pod="openstack/placement-db-sync-vmgbv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.586479 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e99260e-8b90-4cd0-8417-8dc3c142a743-combined-ca-bundle\") pod \"placement-db-sync-vmgbv\" (UID: \"1e99260e-8b90-4cd0-8417-8dc3c142a743\") " pod="openstack/placement-db-sync-vmgbv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.605359 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7x4x\" (UniqueName: \"kubernetes.io/projected/af3d6422-4023-40e2-92c3-ff9327a3ce5d-kube-api-access-j7x4x\") pod \"horizon-7c7cb86f49-tsk8g\" (UID: \"af3d6422-4023-40e2-92c3-ff9327a3ce5d\") " pod="openstack/horizon-7c7cb86f49-tsk8g" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.607654 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwkq8\" (UniqueName: \"kubernetes.io/projected/1e99260e-8b90-4cd0-8417-8dc3c142a743-kube-api-access-fwkq8\") pod \"placement-db-sync-vmgbv\" (UID: \"1e99260e-8b90-4cd0-8417-8dc3c142a743\") " pod="openstack/placement-db-sync-vmgbv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.669429 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2-db-sync-config-data\") pod \"barbican-db-sync-ttn2n\" (UID: \"c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2\") " pod="openstack/barbican-db-sync-ttn2n" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.669473 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-45fs2\" (UID: \"fa2e8489-181d-4c50-b9c5-484432e7e070\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" Nov 25 14:45:13 crc 
kubenswrapper[4796]: I1125 14:45:13.669505 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-45fs2\" (UID: \"fa2e8489-181d-4c50-b9c5-484432e7e070\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.669530 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-log-httpd\") pod \"ceilometer-0\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " pod="openstack/ceilometer-0" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.669562 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkjl4\" (UniqueName: \"kubernetes.io/projected/fa2e8489-181d-4c50-b9c5-484432e7e070-kube-api-access-nkjl4\") pod \"dnsmasq-dns-58dd9ff6bc-45fs2\" (UID: \"fa2e8489-181d-4c50-b9c5-484432e7e070\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.669592 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-45fs2\" (UID: \"fa2e8489-181d-4c50-b9c5-484432e7e070\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.669633 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " pod="openstack/ceilometer-0" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.669652 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-45fs2\" (UID: \"fa2e8489-181d-4c50-b9c5-484432e7e070\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.669669 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p9kt\" (UniqueName: \"kubernetes.io/projected/c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2-kube-api-access-7p9kt\") pod \"barbican-db-sync-ttn2n\" (UID: \"c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2\") " pod="openstack/barbican-db-sync-ttn2n" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.669710 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-config\") pod \"dnsmasq-dns-58dd9ff6bc-45fs2\" (UID: \"fa2e8489-181d-4c50-b9c5-484432e7e070\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.669757 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-scripts\") pod \"ceilometer-0\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " pod="openstack/ceilometer-0" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.669774 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5l42\" (UniqueName: \"kubernetes.io/projected/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-kube-api-access-v5l42\") pod \"ceilometer-0\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " pod="openstack/ceilometer-0" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.669808 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-run-httpd\") pod \"ceilometer-0\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " pod="openstack/ceilometer-0" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.669828 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2-combined-ca-bundle\") pod \"barbican-db-sync-ttn2n\" (UID: \"c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2\") " pod="openstack/barbican-db-sync-ttn2n" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.669848 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " pod="openstack/ceilometer-0" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.669867 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-config-data\") pod \"ceilometer-0\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " pod="openstack/ceilometer-0" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.673742 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-config-data\") pod \"ceilometer-0\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " pod="openstack/ceilometer-0" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.674642 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-config\") pod \"dnsmasq-dns-58dd9ff6bc-45fs2\" (UID: \"fa2e8489-181d-4c50-b9c5-484432e7e070\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" Nov 25 14:45:13 crc 
kubenswrapper[4796]: I1125 14:45:13.677429 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-scripts\") pod \"ceilometer-0\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " pod="openstack/ceilometer-0" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.678149 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-run-httpd\") pod \"ceilometer-0\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " pod="openstack/ceilometer-0" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.678428 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2-db-sync-config-data\") pod \"barbican-db-sync-ttn2n\" (UID: \"c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2\") " pod="openstack/barbican-db-sync-ttn2n" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.679291 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-45fs2\" (UID: \"fa2e8489-181d-4c50-b9c5-484432e7e070\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.680051 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-45fs2\" (UID: \"fa2e8489-181d-4c50-b9c5-484432e7e070\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.680359 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-log-httpd\") pod \"ceilometer-0\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " pod="openstack/ceilometer-0" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.680504 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2-combined-ca-bundle\") pod \"barbican-db-sync-ttn2n\" (UID: \"c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2\") " pod="openstack/barbican-db-sync-ttn2n" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.681365 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-45fs2\" (UID: \"fa2e8489-181d-4c50-b9c5-484432e7e070\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.683281 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-45fs2\" (UID: \"fa2e8489-181d-4c50-b9c5-484432e7e070\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.685473 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " pod="openstack/ceilometer-0" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.687769 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " 
pod="openstack/ceilometer-0" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.691955 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p9kt\" (UniqueName: \"kubernetes.io/projected/c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2-kube-api-access-7p9kt\") pod \"barbican-db-sync-ttn2n\" (UID: \"c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2\") " pod="openstack/barbican-db-sync-ttn2n" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.693155 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5l42\" (UniqueName: \"kubernetes.io/projected/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-kube-api-access-v5l42\") pod \"ceilometer-0\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " pod="openstack/ceilometer-0" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.694851 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkjl4\" (UniqueName: \"kubernetes.io/projected/fa2e8489-181d-4c50-b9c5-484432e7e070-kube-api-access-nkjl4\") pod \"dnsmasq-dns-58dd9ff6bc-45fs2\" (UID: \"fa2e8489-181d-4c50-b9c5-484432e7e070\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.718156 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c7cb86f49-tsk8g" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.875163 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-vmgbv" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.914209 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-ttn2n" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.924542 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" Nov 25 14:45:13 crc kubenswrapper[4796]: I1125 14:45:13.950774 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:45:14 crc kubenswrapper[4796]: I1125 14:45:14.030407 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wlrzr"] Nov 25 14:45:14 crc kubenswrapper[4796]: I1125 14:45:14.039523 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-tvnmv"] Nov 25 14:45:14 crc kubenswrapper[4796]: W1125 14:45:14.076867 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f95a77c_7ff5_46c6_8321_699193becf55.slice/crio-71b8f1d5c7e091b2bf384e2461665c4628c2b4fee590ccfc66f62b213552b126 WatchSource:0}: Error finding container 71b8f1d5c7e091b2bf384e2461665c4628c2b4fee590ccfc66f62b213552b126: Status 404 returned error can't find the container with id 71b8f1d5c7e091b2bf384e2461665c4628c2b4fee590ccfc66f62b213552b126 Nov 25 14:45:14 crc kubenswrapper[4796]: I1125 14:45:14.216691 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-tt8qv"] Nov 25 14:45:14 crc kubenswrapper[4796]: I1125 14:45:14.299450 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b5df6b75c-4v2jt"] Nov 25 14:45:14 crc kubenswrapper[4796]: I1125 14:45:14.313251 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-8fxjq"] Nov 25 14:45:14 crc kubenswrapper[4796]: I1125 14:45:14.477131 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c7cb86f49-tsk8g"] Nov 25 14:45:14 crc kubenswrapper[4796]: I1125 14:45:14.554913 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-vmgbv"] Nov 25 14:45:14 crc kubenswrapper[4796]: W1125 14:45:14.589708 4796 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e99260e_8b90_4cd0_8417_8dc3c142a743.slice/crio-dc8e2cbc5e596fc06fe737a65646969b3176cb69019192d35321c7fa9edac52d WatchSource:0}: Error finding container dc8e2cbc5e596fc06fe737a65646969b3176cb69019192d35321c7fa9edac52d: Status 404 returned error can't find the container with id dc8e2cbc5e596fc06fe737a65646969b3176cb69019192d35321c7fa9edac52d Nov 25 14:45:14 crc kubenswrapper[4796]: I1125 14:45:14.591495 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8fxjq" event={"ID":"182a7451-724e-4649-a911-f26535ec04f9","Type":"ContainerStarted","Data":"8b93e0722e2474e138a91a3dcdec2aa677bd60bfeb0bc4f8fdf777b3527f8503"} Nov 25 14:45:14 crc kubenswrapper[4796]: I1125 14:45:14.596285 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tt8qv" event={"ID":"b0493d28-3276-4a85-a800-4d0b1576c407","Type":"ContainerStarted","Data":"a55c80ed56705fa77b41622708791b00e8577d16f50aed9a671e062dab269c28"} Nov 25 14:45:14 crc kubenswrapper[4796]: I1125 14:45:14.599103 4796 generic.go:334] "Generic (PLEG): container finished" podID="9b6bffb6-e9d9-4c64-b838-83c810fe14a3" containerID="4ff27ae1bed21ac574be527b1cdf05757647dbc12b94decfd7163cd54ffc3150" exitCode=0 Nov 25 14:45:14 crc kubenswrapper[4796]: I1125 14:45:14.599154 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-tvnmv" event={"ID":"9b6bffb6-e9d9-4c64-b838-83c810fe14a3","Type":"ContainerDied","Data":"4ff27ae1bed21ac574be527b1cdf05757647dbc12b94decfd7163cd54ffc3150"} Nov 25 14:45:14 crc kubenswrapper[4796]: I1125 14:45:14.599177 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-tvnmv" event={"ID":"9b6bffb6-e9d9-4c64-b838-83c810fe14a3","Type":"ContainerStarted","Data":"345d0b38f84d5b3f7d1467208c97228a5f957b4a5bc132ef52ec63b9f16ec95c"} Nov 25 14:45:14 crc kubenswrapper[4796]: I1125 14:45:14.605866 
4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b5df6b75c-4v2jt" event={"ID":"95bcc316-2c6f-41e2-bfea-e7fae75cacfa","Type":"ContainerStarted","Data":"fb6a2a55de834f405671a16523d250b4732f16e42890e433ea5990474bb1db87"} Nov 25 14:45:14 crc kubenswrapper[4796]: I1125 14:45:14.610305 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c7cb86f49-tsk8g" event={"ID":"af3d6422-4023-40e2-92c3-ff9327a3ce5d","Type":"ContainerStarted","Data":"f70994a2132e4a835229a4c93d97c8d240907cc3bd64d59631504cbd075cdf6a"} Nov 25 14:45:14 crc kubenswrapper[4796]: I1125 14:45:14.618339 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wlrzr" event={"ID":"9f95a77c-7ff5-46c6-8321-699193becf55","Type":"ContainerStarted","Data":"043b186c4b59ec956c6acb2750a19721e8aeeb3f5ce985bffffd8f6878b862e2"} Nov 25 14:45:14 crc kubenswrapper[4796]: I1125 14:45:14.618363 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wlrzr" event={"ID":"9f95a77c-7ff5-46c6-8321-699193becf55","Type":"ContainerStarted","Data":"71b8f1d5c7e091b2bf384e2461665c4628c2b4fee590ccfc66f62b213552b126"} Nov 25 14:45:14 crc kubenswrapper[4796]: I1125 14:45:14.722201 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wlrzr" podStartSLOduration=2.722175671 podStartE2EDuration="2.722175671s" podCreationTimestamp="2025-11-25 14:45:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:45:14.717019071 +0000 UTC m=+1243.060128485" watchObservedRunningTime="2025-11-25 14:45:14.722175671 +0000 UTC m=+1243.065285095" Nov 25 14:45:14 crc kubenswrapper[4796]: W1125 14:45:14.759370 4796 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa2e8489_181d_4c50_b9c5_484432e7e070.slice/crio-b99932f691c72032a4a04e2846b72bb600853850c8553989e1cb85f97cee0d9e WatchSource:0}: Error finding container b99932f691c72032a4a04e2846b72bb600853850c8553989e1cb85f97cee0d9e: Status 404 returned error can't find the container with id b99932f691c72032a4a04e2846b72bb600853850c8553989e1cb85f97cee0d9e Nov 25 14:45:14 crc kubenswrapper[4796]: I1125 14:45:14.766161 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-45fs2"] Nov 25 14:45:14 crc kubenswrapper[4796]: I1125 14:45:14.815899 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:45:14 crc kubenswrapper[4796]: I1125 14:45:14.860930 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-ttn2n"] Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.028137 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-tvnmv" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.164019 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-ovsdbserver-sb\") pod \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\" (UID: \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\") " Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.164410 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-config\") pod \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\" (UID: \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\") " Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.164535 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-ovsdbserver-nb\") pod \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\" (UID: \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\") " Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.164651 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6q84\" (UniqueName: \"kubernetes.io/projected/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-kube-api-access-b6q84\") pod \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\" (UID: \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\") " Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.164732 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-dns-svc\") pod \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\" (UID: \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\") " Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.164775 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-dns-swift-storage-0\") pod \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\" (UID: \"9b6bffb6-e9d9-4c64-b838-83c810fe14a3\") " Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.187833 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-kube-api-access-b6q84" (OuterVolumeSpecName: "kube-api-access-b6q84") pod "9b6bffb6-e9d9-4c64-b838-83c810fe14a3" (UID: "9b6bffb6-e9d9-4c64-b838-83c810fe14a3"). InnerVolumeSpecName "kube-api-access-b6q84". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.210796 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9b6bffb6-e9d9-4c64-b838-83c810fe14a3" (UID: "9b6bffb6-e9d9-4c64-b838-83c810fe14a3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.211203 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9b6bffb6-e9d9-4c64-b838-83c810fe14a3" (UID: "9b6bffb6-e9d9-4c64-b838-83c810fe14a3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.230490 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9b6bffb6-e9d9-4c64-b838-83c810fe14a3" (UID: "9b6bffb6-e9d9-4c64-b838-83c810fe14a3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.273533 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.273667 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.273676 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6q84\" (UniqueName: \"kubernetes.io/projected/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-kube-api-access-b6q84\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.273686 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.286174 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9b6bffb6-e9d9-4c64-b838-83c810fe14a3" (UID: "9b6bffb6-e9d9-4c64-b838-83c810fe14a3"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.288206 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b5df6b75c-4v2jt"] Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.289336 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-config" (OuterVolumeSpecName: "config") pod "9b6bffb6-e9d9-4c64-b838-83c810fe14a3" (UID: "9b6bffb6-e9d9-4c64-b838-83c810fe14a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.321118 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.339351 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5cf8d5f86f-2vlzd"] Nov 25 14:45:15 crc kubenswrapper[4796]: E1125 14:45:15.339787 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b6bffb6-e9d9-4c64-b838-83c810fe14a3" containerName="init" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.339811 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b6bffb6-e9d9-4c64-b838-83c810fe14a3" containerName="init" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.340013 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b6bffb6-e9d9-4c64-b838-83c810fe14a3" containerName="init" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.340939 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5cf8d5f86f-2vlzd" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.353701 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5cf8d5f86f-2vlzd"] Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.380599 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.380625 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b6bffb6-e9d9-4c64-b838-83c810fe14a3-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.481544 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e628aee-a53e-4a15-860c-90f3b47de705-logs\") pod \"horizon-5cf8d5f86f-2vlzd\" (UID: \"9e628aee-a53e-4a15-860c-90f3b47de705\") " pod="openstack/horizon-5cf8d5f86f-2vlzd" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.481708 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9e628aee-a53e-4a15-860c-90f3b47de705-horizon-secret-key\") pod \"horizon-5cf8d5f86f-2vlzd\" (UID: \"9e628aee-a53e-4a15-860c-90f3b47de705\") " pod="openstack/horizon-5cf8d5f86f-2vlzd" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.481911 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e628aee-a53e-4a15-860c-90f3b47de705-scripts\") pod \"horizon-5cf8d5f86f-2vlzd\" (UID: \"9e628aee-a53e-4a15-860c-90f3b47de705\") " pod="openstack/horizon-5cf8d5f86f-2vlzd" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.481949 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e628aee-a53e-4a15-860c-90f3b47de705-config-data\") pod \"horizon-5cf8d5f86f-2vlzd\" (UID: \"9e628aee-a53e-4a15-860c-90f3b47de705\") " pod="openstack/horizon-5cf8d5f86f-2vlzd" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.481995 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2vnb\" (UniqueName: \"kubernetes.io/projected/9e628aee-a53e-4a15-860c-90f3b47de705-kube-api-access-j2vnb\") pod \"horizon-5cf8d5f86f-2vlzd\" (UID: \"9e628aee-a53e-4a15-860c-90f3b47de705\") " pod="openstack/horizon-5cf8d5f86f-2vlzd" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.583163 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e628aee-a53e-4a15-860c-90f3b47de705-config-data\") pod \"horizon-5cf8d5f86f-2vlzd\" (UID: \"9e628aee-a53e-4a15-860c-90f3b47de705\") " pod="openstack/horizon-5cf8d5f86f-2vlzd" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.583230 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2vnb\" (UniqueName: \"kubernetes.io/projected/9e628aee-a53e-4a15-860c-90f3b47de705-kube-api-access-j2vnb\") pod \"horizon-5cf8d5f86f-2vlzd\" (UID: \"9e628aee-a53e-4a15-860c-90f3b47de705\") " pod="openstack/horizon-5cf8d5f86f-2vlzd" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.583285 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e628aee-a53e-4a15-860c-90f3b47de705-logs\") pod \"horizon-5cf8d5f86f-2vlzd\" (UID: \"9e628aee-a53e-4a15-860c-90f3b47de705\") " pod="openstack/horizon-5cf8d5f86f-2vlzd" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.583323 4796 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9e628aee-a53e-4a15-860c-90f3b47de705-horizon-secret-key\") pod \"horizon-5cf8d5f86f-2vlzd\" (UID: \"9e628aee-a53e-4a15-860c-90f3b47de705\") " pod="openstack/horizon-5cf8d5f86f-2vlzd" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.583380 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e628aee-a53e-4a15-860c-90f3b47de705-scripts\") pod \"horizon-5cf8d5f86f-2vlzd\" (UID: \"9e628aee-a53e-4a15-860c-90f3b47de705\") " pod="openstack/horizon-5cf8d5f86f-2vlzd" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.584181 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e628aee-a53e-4a15-860c-90f3b47de705-scripts\") pod \"horizon-5cf8d5f86f-2vlzd\" (UID: \"9e628aee-a53e-4a15-860c-90f3b47de705\") " pod="openstack/horizon-5cf8d5f86f-2vlzd" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.584716 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e628aee-a53e-4a15-860c-90f3b47de705-logs\") pod \"horizon-5cf8d5f86f-2vlzd\" (UID: \"9e628aee-a53e-4a15-860c-90f3b47de705\") " pod="openstack/horizon-5cf8d5f86f-2vlzd" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.585490 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e628aee-a53e-4a15-860c-90f3b47de705-config-data\") pod \"horizon-5cf8d5f86f-2vlzd\" (UID: \"9e628aee-a53e-4a15-860c-90f3b47de705\") " pod="openstack/horizon-5cf8d5f86f-2vlzd" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.588165 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9e628aee-a53e-4a15-860c-90f3b47de705-horizon-secret-key\") pod \"horizon-5cf8d5f86f-2vlzd\" (UID: 
\"9e628aee-a53e-4a15-860c-90f3b47de705\") " pod="openstack/horizon-5cf8d5f86f-2vlzd" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.607010 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2vnb\" (UniqueName: \"kubernetes.io/projected/9e628aee-a53e-4a15-860c-90f3b47de705-kube-api-access-j2vnb\") pod \"horizon-5cf8d5f86f-2vlzd\" (UID: \"9e628aee-a53e-4a15-860c-90f3b47de705\") " pod="openstack/horizon-5cf8d5f86f-2vlzd" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.633437 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"395c0fb7-8e73-4c01-a5fb-6b17af27e57d","Type":"ContainerStarted","Data":"4c448645f752cf853a5c3cbc16b34368d5d6d69fbdac9ac1928e989811d7408a"} Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.637123 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8fxjq" event={"ID":"182a7451-724e-4649-a911-f26535ec04f9","Type":"ContainerStarted","Data":"b5f9a3f112bf73deb92d34f9303693dd31b99f2aa0bf345c73078397cc705f6e"} Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.640143 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ttn2n" event={"ID":"c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2","Type":"ContainerStarted","Data":"b613cf464a236546766f8092da4e2c4b310c0d16185a2b7772d07180768e4bf0"} Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.643808 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-tvnmv" event={"ID":"9b6bffb6-e9d9-4c64-b838-83c810fe14a3","Type":"ContainerDied","Data":"345d0b38f84d5b3f7d1467208c97228a5f957b4a5bc132ef52ec63b9f16ec95c"} Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.643853 4796 scope.go:117] "RemoveContainer" containerID="4ff27ae1bed21ac574be527b1cdf05757647dbc12b94decfd7163cd54ffc3150" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.645052 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-tvnmv" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.653414 4796 generic.go:334] "Generic (PLEG): container finished" podID="fa2e8489-181d-4c50-b9c5-484432e7e070" containerID="ebf03709a7671242e78f83c2219185f594b0fc0e62c717fbc25b86db7014a271" exitCode=0 Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.653475 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" event={"ID":"fa2e8489-181d-4c50-b9c5-484432e7e070","Type":"ContainerDied","Data":"ebf03709a7671242e78f83c2219185f594b0fc0e62c717fbc25b86db7014a271"} Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.653500 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" event={"ID":"fa2e8489-181d-4c50-b9c5-484432e7e070","Type":"ContainerStarted","Data":"b99932f691c72032a4a04e2846b72bb600853850c8553989e1cb85f97cee0d9e"} Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.655239 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-8fxjq" podStartSLOduration=2.655216749 podStartE2EDuration="2.655216749s" podCreationTimestamp="2025-11-25 14:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:45:15.652513455 +0000 UTC m=+1243.995622889" watchObservedRunningTime="2025-11-25 14:45:15.655216749 +0000 UTC m=+1243.998326173" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.657553 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vmgbv" event={"ID":"1e99260e-8b90-4cd0-8417-8dc3c142a743","Type":"ContainerStarted","Data":"dc8e2cbc5e596fc06fe737a65646969b3176cb69019192d35321c7fa9edac52d"} Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.715484 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5cf8d5f86f-2vlzd" Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.758969 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-tvnmv"] Nov 25 14:45:15 crc kubenswrapper[4796]: I1125 14:45:15.767653 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-tvnmv"] Nov 25 14:45:16 crc kubenswrapper[4796]: I1125 14:45:16.295014 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5cf8d5f86f-2vlzd"] Nov 25 14:45:16 crc kubenswrapper[4796]: W1125 14:45:16.317247 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e628aee_a53e_4a15_860c_90f3b47de705.slice/crio-d7d28c73457d622b503dc0ed4404f49d202f07345035c3a64fbfcfe450f4615c WatchSource:0}: Error finding container d7d28c73457d622b503dc0ed4404f49d202f07345035c3a64fbfcfe450f4615c: Status 404 returned error can't find the container with id d7d28c73457d622b503dc0ed4404f49d202f07345035c3a64fbfcfe450f4615c Nov 25 14:45:16 crc kubenswrapper[4796]: I1125 14:45:16.424095 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b6bffb6-e9d9-4c64-b838-83c810fe14a3" path="/var/lib/kubelet/pods/9b6bffb6-e9d9-4c64-b838-83c810fe14a3/volumes" Nov 25 14:45:16 crc kubenswrapper[4796]: I1125 14:45:16.675993 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" event={"ID":"fa2e8489-181d-4c50-b9c5-484432e7e070","Type":"ContainerStarted","Data":"5c985db77f68d3cca3774ab5f365c45b689f277713d8f213ac0a9adea49da527"} Nov 25 14:45:16 crc kubenswrapper[4796]: I1125 14:45:16.676149 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" Nov 25 14:45:16 crc kubenswrapper[4796]: I1125 14:45:16.679246 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cf8d5f86f-2vlzd" 
event={"ID":"9e628aee-a53e-4a15-860c-90f3b47de705","Type":"ContainerStarted","Data":"d7d28c73457d622b503dc0ed4404f49d202f07345035c3a64fbfcfe450f4615c"} Nov 25 14:45:16 crc kubenswrapper[4796]: I1125 14:45:16.722046 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" podStartSLOduration=3.722020157 podStartE2EDuration="3.722020157s" podCreationTimestamp="2025-11-25 14:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:45:16.695055246 +0000 UTC m=+1245.038164680" watchObservedRunningTime="2025-11-25 14:45:16.722020157 +0000 UTC m=+1245.065129581" Nov 25 14:45:20 crc kubenswrapper[4796]: I1125 14:45:20.720888 4796 generic.go:334] "Generic (PLEG): container finished" podID="9f95a77c-7ff5-46c6-8321-699193becf55" containerID="043b186c4b59ec956c6acb2750a19721e8aeeb3f5ce985bffffd8f6878b862e2" exitCode=0 Nov 25 14:45:20 crc kubenswrapper[4796]: I1125 14:45:20.721238 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wlrzr" event={"ID":"9f95a77c-7ff5-46c6-8321-699193becf55","Type":"ContainerDied","Data":"043b186c4b59ec956c6acb2750a19721e8aeeb3f5ce985bffffd8f6878b862e2"} Nov 25 14:45:21 crc kubenswrapper[4796]: E1125 14:45:21.230389 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3947d76_dff0_44d7_9b86_d2a0406db500.slice/crio-conmon-3d3e0703fe13c0316f521fa3c9dd09f327a2ea1690e7152d4c9b7354c6c0c159.scope\": RecentStats: unable to find data in memory cache]" Nov 25 14:45:21 crc kubenswrapper[4796]: I1125 14:45:21.730719 4796 generic.go:334] "Generic (PLEG): container finished" podID="d3947d76-dff0-44d7-9b86-d2a0406db500" containerID="3d3e0703fe13c0316f521fa3c9dd09f327a2ea1690e7152d4c9b7354c6c0c159" exitCode=0 Nov 25 14:45:21 crc 
kubenswrapper[4796]: I1125 14:45:21.730804 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kddwd" event={"ID":"d3947d76-dff0-44d7-9b86-d2a0406db500","Type":"ContainerDied","Data":"3d3e0703fe13c0316f521fa3c9dd09f327a2ea1690e7152d4c9b7354c6c0c159"} Nov 25 14:45:21 crc kubenswrapper[4796]: I1125 14:45:21.927504 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7c7cb86f49-tsk8g"] Nov 25 14:45:21 crc kubenswrapper[4796]: I1125 14:45:21.975887 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7cd9956864-5xkx5"] Nov 25 14:45:21 crc kubenswrapper[4796]: I1125 14:45:21.977851 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:45:21 crc kubenswrapper[4796]: I1125 14:45:21.981966 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 25 14:45:21 crc kubenswrapper[4796]: I1125 14:45:21.982809 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cd9956864-5xkx5"] Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.048236 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5cf8d5f86f-2vlzd"] Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.075471 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-674489f5b-nnl97"] Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.077115 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.084303 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-674489f5b-nnl97"] Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.135135 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23942f6c-a777-4b11-a51d-ccaee1fff6e7-logs\") pod \"horizon-7cd9956864-5xkx5\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.135606 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/23942f6c-a777-4b11-a51d-ccaee1fff6e7-horizon-tls-certs\") pod \"horizon-7cd9956864-5xkx5\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.135668 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/23942f6c-a777-4b11-a51d-ccaee1fff6e7-horizon-secret-key\") pod \"horizon-7cd9956864-5xkx5\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.135732 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j79g\" (UniqueName: \"kubernetes.io/projected/23942f6c-a777-4b11-a51d-ccaee1fff6e7-kube-api-access-7j79g\") pod \"horizon-7cd9956864-5xkx5\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.135877 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/23942f6c-a777-4b11-a51d-ccaee1fff6e7-config-data\") pod \"horizon-7cd9956864-5xkx5\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.135982 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23942f6c-a777-4b11-a51d-ccaee1fff6e7-combined-ca-bundle\") pod \"horizon-7cd9956864-5xkx5\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.136318 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23942f6c-a777-4b11-a51d-ccaee1fff6e7-scripts\") pod \"horizon-7cd9956864-5xkx5\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.238490 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/23942f6c-a777-4b11-a51d-ccaee1fff6e7-horizon-tls-certs\") pod \"horizon-7cd9956864-5xkx5\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.238547 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/23942f6c-a777-4b11-a51d-ccaee1fff6e7-horizon-secret-key\") pod \"horizon-7cd9956864-5xkx5\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.238593 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j79g\" (UniqueName: 
\"kubernetes.io/projected/23942f6c-a777-4b11-a51d-ccaee1fff6e7-kube-api-access-7j79g\") pod \"horizon-7cd9956864-5xkx5\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.238646 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23942f6c-a777-4b11-a51d-ccaee1fff6e7-config-data\") pod \"horizon-7cd9956864-5xkx5\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.238679 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23942f6c-a777-4b11-a51d-ccaee1fff6e7-combined-ca-bundle\") pod \"horizon-7cd9956864-5xkx5\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.238709 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8f52433-dd17-499e-8ac4-bda250a52460-horizon-tls-certs\") pod \"horizon-674489f5b-nnl97\" (UID: \"b8f52433-dd17-499e-8ac4-bda250a52460\") " pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.238731 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f52433-dd17-499e-8ac4-bda250a52460-combined-ca-bundle\") pod \"horizon-674489f5b-nnl97\" (UID: \"b8f52433-dd17-499e-8ac4-bda250a52460\") " pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.238808 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gblx\" (UniqueName: 
\"kubernetes.io/projected/b8f52433-dd17-499e-8ac4-bda250a52460-kube-api-access-7gblx\") pod \"horizon-674489f5b-nnl97\" (UID: \"b8f52433-dd17-499e-8ac4-bda250a52460\") " pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.238843 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23942f6c-a777-4b11-a51d-ccaee1fff6e7-scripts\") pod \"horizon-7cd9956864-5xkx5\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.238865 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b8f52433-dd17-499e-8ac4-bda250a52460-horizon-secret-key\") pod \"horizon-674489f5b-nnl97\" (UID: \"b8f52433-dd17-499e-8ac4-bda250a52460\") " pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.238901 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8f52433-dd17-499e-8ac4-bda250a52460-scripts\") pod \"horizon-674489f5b-nnl97\" (UID: \"b8f52433-dd17-499e-8ac4-bda250a52460\") " pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.238931 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8f52433-dd17-499e-8ac4-bda250a52460-config-data\") pod \"horizon-674489f5b-nnl97\" (UID: \"b8f52433-dd17-499e-8ac4-bda250a52460\") " pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.238958 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23942f6c-a777-4b11-a51d-ccaee1fff6e7-logs\") pod 
\"horizon-7cd9956864-5xkx5\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.238986 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8f52433-dd17-499e-8ac4-bda250a52460-logs\") pod \"horizon-674489f5b-nnl97\" (UID: \"b8f52433-dd17-499e-8ac4-bda250a52460\") " pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.239948 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23942f6c-a777-4b11-a51d-ccaee1fff6e7-scripts\") pod \"horizon-7cd9956864-5xkx5\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.240226 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23942f6c-a777-4b11-a51d-ccaee1fff6e7-logs\") pod \"horizon-7cd9956864-5xkx5\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.241062 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23942f6c-a777-4b11-a51d-ccaee1fff6e7-config-data\") pod \"horizon-7cd9956864-5xkx5\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.245165 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23942f6c-a777-4b11-a51d-ccaee1fff6e7-combined-ca-bundle\") pod \"horizon-7cd9956864-5xkx5\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:45:22 crc kubenswrapper[4796]: 
I1125 14:45:22.246189 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/23942f6c-a777-4b11-a51d-ccaee1fff6e7-horizon-secret-key\") pod \"horizon-7cd9956864-5xkx5\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.254529 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/23942f6c-a777-4b11-a51d-ccaee1fff6e7-horizon-tls-certs\") pod \"horizon-7cd9956864-5xkx5\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.256440 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j79g\" (UniqueName: \"kubernetes.io/projected/23942f6c-a777-4b11-a51d-ccaee1fff6e7-kube-api-access-7j79g\") pod \"horizon-7cd9956864-5xkx5\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.314533 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.340919 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gblx\" (UniqueName: \"kubernetes.io/projected/b8f52433-dd17-499e-8ac4-bda250a52460-kube-api-access-7gblx\") pod \"horizon-674489f5b-nnl97\" (UID: \"b8f52433-dd17-499e-8ac4-bda250a52460\") " pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.340975 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b8f52433-dd17-499e-8ac4-bda250a52460-horizon-secret-key\") pod \"horizon-674489f5b-nnl97\" (UID: \"b8f52433-dd17-499e-8ac4-bda250a52460\") " pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.341013 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8f52433-dd17-499e-8ac4-bda250a52460-scripts\") pod \"horizon-674489f5b-nnl97\" (UID: \"b8f52433-dd17-499e-8ac4-bda250a52460\") " pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.341042 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8f52433-dd17-499e-8ac4-bda250a52460-config-data\") pod \"horizon-674489f5b-nnl97\" (UID: \"b8f52433-dd17-499e-8ac4-bda250a52460\") " pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.341074 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8f52433-dd17-499e-8ac4-bda250a52460-logs\") pod \"horizon-674489f5b-nnl97\" (UID: \"b8f52433-dd17-499e-8ac4-bda250a52460\") " pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 
14:45:22.341172 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8f52433-dd17-499e-8ac4-bda250a52460-horizon-tls-certs\") pod \"horizon-674489f5b-nnl97\" (UID: \"b8f52433-dd17-499e-8ac4-bda250a52460\") " pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.341197 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f52433-dd17-499e-8ac4-bda250a52460-combined-ca-bundle\") pod \"horizon-674489f5b-nnl97\" (UID: \"b8f52433-dd17-499e-8ac4-bda250a52460\") " pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.341969 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8f52433-dd17-499e-8ac4-bda250a52460-logs\") pod \"horizon-674489f5b-nnl97\" (UID: \"b8f52433-dd17-499e-8ac4-bda250a52460\") " pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.342508 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8f52433-dd17-499e-8ac4-bda250a52460-scripts\") pod \"horizon-674489f5b-nnl97\" (UID: \"b8f52433-dd17-499e-8ac4-bda250a52460\") " pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.345028 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b8f52433-dd17-499e-8ac4-bda250a52460-horizon-secret-key\") pod \"horizon-674489f5b-nnl97\" (UID: \"b8f52433-dd17-499e-8ac4-bda250a52460\") " pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.345640 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b8f52433-dd17-499e-8ac4-bda250a52460-horizon-tls-certs\") pod \"horizon-674489f5b-nnl97\" (UID: \"b8f52433-dd17-499e-8ac4-bda250a52460\") " pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.346125 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f52433-dd17-499e-8ac4-bda250a52460-combined-ca-bundle\") pod \"horizon-674489f5b-nnl97\" (UID: \"b8f52433-dd17-499e-8ac4-bda250a52460\") " pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.348123 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8f52433-dd17-499e-8ac4-bda250a52460-config-data\") pod \"horizon-674489f5b-nnl97\" (UID: \"b8f52433-dd17-499e-8ac4-bda250a52460\") " pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.362347 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gblx\" (UniqueName: \"kubernetes.io/projected/b8f52433-dd17-499e-8ac4-bda250a52460-kube-api-access-7gblx\") pod \"horizon-674489f5b-nnl97\" (UID: \"b8f52433-dd17-499e-8ac4-bda250a52460\") " pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:45:22 crc kubenswrapper[4796]: I1125 14:45:22.414034 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:45:23 crc kubenswrapper[4796]: I1125 14:45:23.926734 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" Nov 25 14:45:24 crc kubenswrapper[4796]: I1125 14:45:24.004662 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-79zq7"] Nov 25 14:45:24 crc kubenswrapper[4796]: I1125 14:45:24.005899 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-79zq7" podUID="821175f1-a773-4def-b744-22423894346c" containerName="dnsmasq-dns" containerID="cri-o://c9e9dcd9e4f1f98516f78c5ee8e84d685accb67b14f1d0d698a9cfca628fd119" gracePeriod=10 Nov 25 14:45:24 crc kubenswrapper[4796]: I1125 14:45:24.775174 4796 generic.go:334] "Generic (PLEG): container finished" podID="821175f1-a773-4def-b744-22423894346c" containerID="c9e9dcd9e4f1f98516f78c5ee8e84d685accb67b14f1d0d698a9cfca628fd119" exitCode=0 Nov 25 14:45:24 crc kubenswrapper[4796]: I1125 14:45:24.775217 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-79zq7" event={"ID":"821175f1-a773-4def-b744-22423894346c","Type":"ContainerDied","Data":"c9e9dcd9e4f1f98516f78c5ee8e84d685accb67b14f1d0d698a9cfca628fd119"} Nov 25 14:45:28 crc kubenswrapper[4796]: I1125 14:45:28.519954 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-79zq7" podUID="821175f1-a773-4def-b744-22423894346c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.131:5353: connect: connection refused" Nov 25 14:45:30 crc kubenswrapper[4796]: E1125 14:45:30.789156 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 25 14:45:30 crc kubenswrapper[4796]: E1125 14:45:30.790251 4796 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n674h5c7h55fh554h5cbh9fh67dh584h7dh588h579hc9h569h698h98hd8h5b7h577h58bh689h687h648hbfh554h5b6h59dhcch594h556h5bh8bh7q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j2vnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
horizon-5cf8d5f86f-2vlzd_openstack(9e628aee-a53e-4a15-860c-90f3b47de705): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 14:45:30 crc kubenswrapper[4796]: E1125 14:45:30.791833 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 25 14:45:30 crc kubenswrapper[4796]: E1125 14:45:30.792133 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n657h569h7ch99h67hd8h5c6h5bch54dh554h59dh94h84h569h566h57ch566h5d9hf4h669h75h5fh568h64ch5b4h9fh696h74h547hb7h7bh6cq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j7x4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capa
bilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7c7cb86f49-tsk8g_openstack(af3d6422-4023-40e2-92c3-ff9327a3ce5d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 14:45:30 crc kubenswrapper[4796]: E1125 14:45:30.793080 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5cf8d5f86f-2vlzd" podUID="9e628aee-a53e-4a15-860c-90f3b47de705" Nov 25 14:45:30 crc kubenswrapper[4796]: E1125 14:45:30.794243 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7c7cb86f49-tsk8g" podUID="af3d6422-4023-40e2-92c3-ff9327a3ce5d" Nov 25 14:45:30 crc kubenswrapper[4796]: E1125 14:45:30.826413 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 25 14:45:30 
crc kubenswrapper[4796]: E1125 14:45:30.826668 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n557h669h5b9h5b6h5d4h5bbh578h5dbh5dh9bh5bfh644h557h5dfhdch558hdh557hcfh5fdhbbh89hc8h5fdh64bh586h665h57h88h59h664h5dbq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c84dq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,
} start failed in pod horizon-6b5df6b75c-4v2jt_openstack(95bcc316-2c6f-41e2-bfea-e7fae75cacfa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 14:45:30 crc kubenswrapper[4796]: E1125 14:45:30.828820 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6b5df6b75c-4v2jt" podUID="95bcc316-2c6f-41e2-bfea-e7fae75cacfa" Nov 25 14:45:30 crc kubenswrapper[4796]: I1125 14:45:30.830885 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kddwd" event={"ID":"d3947d76-dff0-44d7-9b86-d2a0406db500","Type":"ContainerDied","Data":"f50ee0fbaba9e9b0de548d9dce9f34b0bbe1af7e0b066be0cd81d2be23357b85"} Nov 25 14:45:30 crc kubenswrapper[4796]: I1125 14:45:30.830958 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f50ee0fbaba9e9b0de548d9dce9f34b0bbe1af7e0b066be0cd81d2be23357b85" Nov 25 14:45:30 crc kubenswrapper[4796]: I1125 14:45:30.833991 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wlrzr" event={"ID":"9f95a77c-7ff5-46c6-8321-699193becf55","Type":"ContainerDied","Data":"71b8f1d5c7e091b2bf384e2461665c4628c2b4fee590ccfc66f62b213552b126"} Nov 25 14:45:30 crc kubenswrapper[4796]: I1125 14:45:30.834057 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71b8f1d5c7e091b2bf384e2461665c4628c2b4fee590ccfc66f62b213552b126" Nov 25 14:45:30 crc kubenswrapper[4796]: I1125 14:45:30.918511 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wlrzr" Nov 25 14:45:30 crc kubenswrapper[4796]: I1125 14:45:30.923331 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kddwd" Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.006765 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-fernet-keys\") pod \"9f95a77c-7ff5-46c6-8321-699193becf55\" (UID: \"9f95a77c-7ff5-46c6-8321-699193becf55\") " Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.006828 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3947d76-dff0-44d7-9b86-d2a0406db500-combined-ca-bundle\") pod \"d3947d76-dff0-44d7-9b86-d2a0406db500\" (UID: \"d3947d76-dff0-44d7-9b86-d2a0406db500\") " Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.006865 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-config-data\") pod \"9f95a77c-7ff5-46c6-8321-699193becf55\" (UID: \"9f95a77c-7ff5-46c6-8321-699193becf55\") " Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.006891 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-combined-ca-bundle\") pod \"9f95a77c-7ff5-46c6-8321-699193becf55\" (UID: \"9f95a77c-7ff5-46c6-8321-699193becf55\") " Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.006992 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn525\" (UniqueName: \"kubernetes.io/projected/9f95a77c-7ff5-46c6-8321-699193becf55-kube-api-access-rn525\") pod \"9f95a77c-7ff5-46c6-8321-699193becf55\" (UID: 
\"9f95a77c-7ff5-46c6-8321-699193becf55\") " Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.007010 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-scripts\") pod \"9f95a77c-7ff5-46c6-8321-699193becf55\" (UID: \"9f95a77c-7ff5-46c6-8321-699193becf55\") " Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.007045 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-credential-keys\") pod \"9f95a77c-7ff5-46c6-8321-699193becf55\" (UID: \"9f95a77c-7ff5-46c6-8321-699193becf55\") " Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.007077 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvfvq\" (UniqueName: \"kubernetes.io/projected/d3947d76-dff0-44d7-9b86-d2a0406db500-kube-api-access-dvfvq\") pod \"d3947d76-dff0-44d7-9b86-d2a0406db500\" (UID: \"d3947d76-dff0-44d7-9b86-d2a0406db500\") " Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.007100 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3947d76-dff0-44d7-9b86-d2a0406db500-config-data\") pod \"d3947d76-dff0-44d7-9b86-d2a0406db500\" (UID: \"d3947d76-dff0-44d7-9b86-d2a0406db500\") " Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.007126 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d3947d76-dff0-44d7-9b86-d2a0406db500-db-sync-config-data\") pod \"d3947d76-dff0-44d7-9b86-d2a0406db500\" (UID: \"d3947d76-dff0-44d7-9b86-d2a0406db500\") " Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.013636 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9f95a77c-7ff5-46c6-8321-699193becf55-kube-api-access-rn525" (OuterVolumeSpecName: "kube-api-access-rn525") pod "9f95a77c-7ff5-46c6-8321-699193becf55" (UID: "9f95a77c-7ff5-46c6-8321-699193becf55"). InnerVolumeSpecName "kube-api-access-rn525". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.014928 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-scripts" (OuterVolumeSpecName: "scripts") pod "9f95a77c-7ff5-46c6-8321-699193becf55" (UID: "9f95a77c-7ff5-46c6-8321-699193becf55"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.016016 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3947d76-dff0-44d7-9b86-d2a0406db500-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d3947d76-dff0-44d7-9b86-d2a0406db500" (UID: "d3947d76-dff0-44d7-9b86-d2a0406db500"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.017804 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3947d76-dff0-44d7-9b86-d2a0406db500-kube-api-access-dvfvq" (OuterVolumeSpecName: "kube-api-access-dvfvq") pod "d3947d76-dff0-44d7-9b86-d2a0406db500" (UID: "d3947d76-dff0-44d7-9b86-d2a0406db500"). InnerVolumeSpecName "kube-api-access-dvfvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.029869 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9f95a77c-7ff5-46c6-8321-699193becf55" (UID: "9f95a77c-7ff5-46c6-8321-699193becf55"). 
InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.031611 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9f95a77c-7ff5-46c6-8321-699193becf55" (UID: "9f95a77c-7ff5-46c6-8321-699193becf55"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.050035 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-config-data" (OuterVolumeSpecName: "config-data") pod "9f95a77c-7ff5-46c6-8321-699193becf55" (UID: "9f95a77c-7ff5-46c6-8321-699193becf55"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.076444 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f95a77c-7ff5-46c6-8321-699193becf55" (UID: "9f95a77c-7ff5-46c6-8321-699193becf55"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.079866 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3947d76-dff0-44d7-9b86-d2a0406db500-config-data" (OuterVolumeSpecName: "config-data") pod "d3947d76-dff0-44d7-9b86-d2a0406db500" (UID: "d3947d76-dff0-44d7-9b86-d2a0406db500"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.108743 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn525\" (UniqueName: \"kubernetes.io/projected/9f95a77c-7ff5-46c6-8321-699193becf55-kube-api-access-rn525\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.108774 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.108785 4796 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.108793 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvfvq\" (UniqueName: \"kubernetes.io/projected/d3947d76-dff0-44d7-9b86-d2a0406db500-kube-api-access-dvfvq\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.108801 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3947d76-dff0-44d7-9b86-d2a0406db500-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.108810 4796 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d3947d76-dff0-44d7-9b86-d2a0406db500-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.108818 4796 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.108826 4796 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.108835 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f95a77c-7ff5-46c6-8321-699193becf55-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.110684 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3947d76-dff0-44d7-9b86-d2a0406db500-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3947d76-dff0-44d7-9b86-d2a0406db500" (UID: "d3947d76-dff0-44d7-9b86-d2a0406db500"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.210798 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3947d76-dff0-44d7-9b86-d2a0406db500-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.842422 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kddwd" Nov 25 14:45:31 crc kubenswrapper[4796]: I1125 14:45:31.842427 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wlrzr" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.015777 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wlrzr"] Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.025651 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wlrzr"] Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.120586 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-n86kb"] Nov 25 14:45:32 crc kubenswrapper[4796]: E1125 14:45:32.120985 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f95a77c-7ff5-46c6-8321-699193becf55" containerName="keystone-bootstrap" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.121002 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f95a77c-7ff5-46c6-8321-699193becf55" containerName="keystone-bootstrap" Nov 25 14:45:32 crc kubenswrapper[4796]: E1125 14:45:32.121034 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3947d76-dff0-44d7-9b86-d2a0406db500" containerName="glance-db-sync" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.121039 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3947d76-dff0-44d7-9b86-d2a0406db500" containerName="glance-db-sync" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.121227 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3947d76-dff0-44d7-9b86-d2a0406db500" containerName="glance-db-sync" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.121243 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f95a77c-7ff5-46c6-8321-699193becf55" containerName="keystone-bootstrap" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.121808 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-n86kb" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.124025 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.124173 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.124256 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.124268 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.124480 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-8z6zn" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.134337 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-n86kb"] Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.233041 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-scripts\") pod \"keystone-bootstrap-n86kb\" (UID: \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\") " pod="openstack/keystone-bootstrap-n86kb" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.233150 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-credential-keys\") pod \"keystone-bootstrap-n86kb\" (UID: \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\") " pod="openstack/keystone-bootstrap-n86kb" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.233200 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-skx5m\" (UniqueName: \"kubernetes.io/projected/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-kube-api-access-skx5m\") pod \"keystone-bootstrap-n86kb\" (UID: \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\") " pod="openstack/keystone-bootstrap-n86kb" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.233345 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-combined-ca-bundle\") pod \"keystone-bootstrap-n86kb\" (UID: \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\") " pod="openstack/keystone-bootstrap-n86kb" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.233415 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-config-data\") pod \"keystone-bootstrap-n86kb\" (UID: \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\") " pod="openstack/keystone-bootstrap-n86kb" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.233602 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-fernet-keys\") pod \"keystone-bootstrap-n86kb\" (UID: \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\") " pod="openstack/keystone-bootstrap-n86kb" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.327994 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-d58n6"] Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.329427 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.335024 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-combined-ca-bundle\") pod \"keystone-bootstrap-n86kb\" (UID: \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\") " pod="openstack/keystone-bootstrap-n86kb" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.335066 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-config-data\") pod \"keystone-bootstrap-n86kb\" (UID: \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\") " pod="openstack/keystone-bootstrap-n86kb" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.335125 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-fernet-keys\") pod \"keystone-bootstrap-n86kb\" (UID: \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\") " pod="openstack/keystone-bootstrap-n86kb" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.335174 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-scripts\") pod \"keystone-bootstrap-n86kb\" (UID: \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\") " pod="openstack/keystone-bootstrap-n86kb" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.335225 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-credential-keys\") pod \"keystone-bootstrap-n86kb\" (UID: \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\") " pod="openstack/keystone-bootstrap-n86kb" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 
14:45:32.335249 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skx5m\" (UniqueName: \"kubernetes.io/projected/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-kube-api-access-skx5m\") pod \"keystone-bootstrap-n86kb\" (UID: \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\") " pod="openstack/keystone-bootstrap-n86kb" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.339025 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.339275 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.339342 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-d58n6"] Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.339390 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.350334 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-combined-ca-bundle\") pod \"keystone-bootstrap-n86kb\" (UID: \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\") " pod="openstack/keystone-bootstrap-n86kb" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.352518 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-credential-keys\") pod \"keystone-bootstrap-n86kb\" (UID: \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\") " pod="openstack/keystone-bootstrap-n86kb" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.354935 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-scripts\") pod 
\"keystone-bootstrap-n86kb\" (UID: \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\") " pod="openstack/keystone-bootstrap-n86kb" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.359790 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-config-data\") pod \"keystone-bootstrap-n86kb\" (UID: \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\") " pod="openstack/keystone-bootstrap-n86kb" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.367196 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-fernet-keys\") pod \"keystone-bootstrap-n86kb\" (UID: \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\") " pod="openstack/keystone-bootstrap-n86kb" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.373166 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skx5m\" (UniqueName: \"kubernetes.io/projected/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-kube-api-access-skx5m\") pod \"keystone-bootstrap-n86kb\" (UID: \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\") " pod="openstack/keystone-bootstrap-n86kb" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.420972 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f95a77c-7ff5-46c6-8321-699193becf55" path="/var/lib/kubelet/pods/9f95a77c-7ff5-46c6-8321-699193becf55/volumes" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.438845 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-d58n6\" (UID: \"64d653a7-a2c8-439a-9b7c-733682c79eeb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.438906 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-d58n6\" (UID: \"64d653a7-a2c8-439a-9b7c-733682c79eeb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.438996 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-d58n6\" (UID: \"64d653a7-a2c8-439a-9b7c-733682c79eeb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.439032 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-d58n6\" (UID: \"64d653a7-a2c8-439a-9b7c-733682c79eeb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.439077 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-config\") pod \"dnsmasq-dns-785d8bcb8c-d58n6\" (UID: \"64d653a7-a2c8-439a-9b7c-733682c79eeb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.439093 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ppq4\" (UniqueName: \"kubernetes.io/projected/64d653a7-a2c8-439a-9b7c-733682c79eeb-kube-api-access-2ppq4\") pod \"dnsmasq-dns-785d8bcb8c-d58n6\" (UID: \"64d653a7-a2c8-439a-9b7c-733682c79eeb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 
14:45:32.449371 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-8z6zn" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.458707 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-n86kb" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.541540 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-d58n6\" (UID: \"64d653a7-a2c8-439a-9b7c-733682c79eeb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.541626 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-config\") pod \"dnsmasq-dns-785d8bcb8c-d58n6\" (UID: \"64d653a7-a2c8-439a-9b7c-733682c79eeb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.541645 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ppq4\" (UniqueName: \"kubernetes.io/projected/64d653a7-a2c8-439a-9b7c-733682c79eeb-kube-api-access-2ppq4\") pod \"dnsmasq-dns-785d8bcb8c-d58n6\" (UID: \"64d653a7-a2c8-439a-9b7c-733682c79eeb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.541716 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-d58n6\" (UID: \"64d653a7-a2c8-439a-9b7c-733682c79eeb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.541753 4796 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-d58n6\" (UID: \"64d653a7-a2c8-439a-9b7c-733682c79eeb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.541788 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-d58n6\" (UID: \"64d653a7-a2c8-439a-9b7c-733682c79eeb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.542934 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-d58n6\" (UID: \"64d653a7-a2c8-439a-9b7c-733682c79eeb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.543101 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-d58n6\" (UID: \"64d653a7-a2c8-439a-9b7c-733682c79eeb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.543420 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-config\") pod \"dnsmasq-dns-785d8bcb8c-d58n6\" (UID: \"64d653a7-a2c8-439a-9b7c-733682c79eeb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.544235 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-d58n6\" (UID: \"64d653a7-a2c8-439a-9b7c-733682c79eeb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.546636 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-d58n6\" (UID: \"64d653a7-a2c8-439a-9b7c-733682c79eeb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.577320 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ppq4\" (UniqueName: \"kubernetes.io/projected/64d653a7-a2c8-439a-9b7c-733682c79eeb-kube-api-access-2ppq4\") pod \"dnsmasq-dns-785d8bcb8c-d58n6\" (UID: \"64d653a7-a2c8-439a-9b7c-733682c79eeb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" Nov 25 14:45:32 crc kubenswrapper[4796]: I1125 14:45:32.859449 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.243842 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.246230 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.248600 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.252324 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.253756 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.253860 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dxx7v" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.368949 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cc287c5-aa10-4895-96ec-98cce7ff2f63-scripts\") pod \"glance-default-external-api-0\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.369028 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cc287c5-aa10-4895-96ec-98cce7ff2f63-logs\") pod \"glance-default-external-api-0\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.369105 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.369657 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2cc287c5-aa10-4895-96ec-98cce7ff2f63-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.370697 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc287c5-aa10-4895-96ec-98cce7ff2f63-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.370821 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnrbj\" (UniqueName: \"kubernetes.io/projected/2cc287c5-aa10-4895-96ec-98cce7ff2f63-kube-api-access-rnrbj\") pod \"glance-default-external-api-0\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.370880 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cc287c5-aa10-4895-96ec-98cce7ff2f63-config-data\") pod \"glance-default-external-api-0\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.472798 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc287c5-aa10-4895-96ec-98cce7ff2f63-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:33 crc 
kubenswrapper[4796]: I1125 14:45:33.472890 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnrbj\" (UniqueName: \"kubernetes.io/projected/2cc287c5-aa10-4895-96ec-98cce7ff2f63-kube-api-access-rnrbj\") pod \"glance-default-external-api-0\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.472930 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cc287c5-aa10-4895-96ec-98cce7ff2f63-config-data\") pod \"glance-default-external-api-0\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.472977 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cc287c5-aa10-4895-96ec-98cce7ff2f63-scripts\") pod \"glance-default-external-api-0\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.472994 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cc287c5-aa10-4895-96ec-98cce7ff2f63-logs\") pod \"glance-default-external-api-0\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.473032 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.473096 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2cc287c5-aa10-4895-96ec-98cce7ff2f63-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.474790 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.475155 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2cc287c5-aa10-4895-96ec-98cce7ff2f63-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.475220 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cc287c5-aa10-4895-96ec-98cce7ff2f63-logs\") pod \"glance-default-external-api-0\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.479650 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cc287c5-aa10-4895-96ec-98cce7ff2f63-scripts\") pod \"glance-default-external-api-0\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.481458 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2cc287c5-aa10-4895-96ec-98cce7ff2f63-config-data\") pod \"glance-default-external-api-0\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.491251 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc287c5-aa10-4895-96ec-98cce7ff2f63-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.502772 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnrbj\" (UniqueName: \"kubernetes.io/projected/2cc287c5-aa10-4895-96ec-98cce7ff2f63-kube-api-access-rnrbj\") pod \"glance-default-external-api-0\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.509812 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.520087 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-79zq7" podUID="821175f1-a773-4def-b744-22423894346c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.131:5353: connect: connection refused" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.572665 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.599791 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.606159 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.608455 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.633225 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.778290 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32891459-a961-4f3a-9820-5eb167599bd9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.778364 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32891459-a961-4f3a-9820-5eb167599bd9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.778423 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32891459-a961-4f3a-9820-5eb167599bd9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:33 crc 
kubenswrapper[4796]: I1125 14:45:33.778522 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.778668 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klgvn\" (UniqueName: \"kubernetes.io/projected/32891459-a961-4f3a-9820-5eb167599bd9-kube-api-access-klgvn\") pod \"glance-default-internal-api-0\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.778765 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32891459-a961-4f3a-9820-5eb167599bd9-logs\") pod \"glance-default-internal-api-0\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.778830 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32891459-a961-4f3a-9820-5eb167599bd9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.880810 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 
14:45:33.880913 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klgvn\" (UniqueName: \"kubernetes.io/projected/32891459-a961-4f3a-9820-5eb167599bd9-kube-api-access-klgvn\") pod \"glance-default-internal-api-0\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.880956 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32891459-a961-4f3a-9820-5eb167599bd9-logs\") pod \"glance-default-internal-api-0\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.880977 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32891459-a961-4f3a-9820-5eb167599bd9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.881028 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32891459-a961-4f3a-9820-5eb167599bd9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.881046 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32891459-a961-4f3a-9820-5eb167599bd9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.881071 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32891459-a961-4f3a-9820-5eb167599bd9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.881097 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.881595 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32891459-a961-4f3a-9820-5eb167599bd9-logs\") pod \"glance-default-internal-api-0\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.882423 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32891459-a961-4f3a-9820-5eb167599bd9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.885734 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32891459-a961-4f3a-9820-5eb167599bd9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.886807 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/32891459-a961-4f3a-9820-5eb167599bd9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.889851 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32891459-a961-4f3a-9820-5eb167599bd9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.904778 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klgvn\" (UniqueName: \"kubernetes.io/projected/32891459-a961-4f3a-9820-5eb167599bd9-kube-api-access-klgvn\") pod \"glance-default-internal-api-0\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.930931 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:33 crc kubenswrapper[4796]: I1125 14:45:33.953782 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 14:45:34 crc kubenswrapper[4796]: I1125 14:45:34.925419 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 14:45:34 crc kubenswrapper[4796]: I1125 14:45:34.986064 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 14:45:38 crc kubenswrapper[4796]: I1125 14:45:38.520048 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-79zq7" podUID="821175f1-a773-4def-b744-22423894346c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.131:5353: connect: connection refused" Nov 25 14:45:38 crc kubenswrapper[4796]: I1125 14:45:38.520705 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-79zq7" Nov 25 14:45:40 crc kubenswrapper[4796]: E1125 14:45:40.356024 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 25 14:45:40 crc kubenswrapper[4796]: E1125 14:45:40.357456 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7p9kt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-ttn2n_openstack(c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 14:45:40 crc kubenswrapper[4796]: E1125 14:45:40.358766 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-ttn2n" 
podUID="c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.478750 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5cf8d5f86f-2vlzd" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.488166 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c7cb86f49-tsk8g" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.500958 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b5df6b75c-4v2jt" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.621294 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9e628aee-a53e-4a15-860c-90f3b47de705-horizon-secret-key\") pod \"9e628aee-a53e-4a15-860c-90f3b47de705\" (UID: \"9e628aee-a53e-4a15-860c-90f3b47de705\") " Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.621846 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e628aee-a53e-4a15-860c-90f3b47de705-config-data\") pod \"9e628aee-a53e-4a15-860c-90f3b47de705\" (UID: \"9e628aee-a53e-4a15-860c-90f3b47de705\") " Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.621907 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-scripts\") pod \"95bcc316-2c6f-41e2-bfea-e7fae75cacfa\" (UID: \"95bcc316-2c6f-41e2-bfea-e7fae75cacfa\") " Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.622024 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af3d6422-4023-40e2-92c3-ff9327a3ce5d-scripts\") pod \"af3d6422-4023-40e2-92c3-ff9327a3ce5d\" (UID: \"af3d6422-4023-40e2-92c3-ff9327a3ce5d\") " Nov 25 
14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.622076 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e628aee-a53e-4a15-860c-90f3b47de705-logs\") pod \"9e628aee-a53e-4a15-860c-90f3b47de705\" (UID: \"9e628aee-a53e-4a15-860c-90f3b47de705\") " Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.622200 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/af3d6422-4023-40e2-92c3-ff9327a3ce5d-horizon-secret-key\") pod \"af3d6422-4023-40e2-92c3-ff9327a3ce5d\" (UID: \"af3d6422-4023-40e2-92c3-ff9327a3ce5d\") " Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.622244 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af3d6422-4023-40e2-92c3-ff9327a3ce5d-config-data\") pod \"af3d6422-4023-40e2-92c3-ff9327a3ce5d\" (UID: \"af3d6422-4023-40e2-92c3-ff9327a3ce5d\") " Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.622316 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af3d6422-4023-40e2-92c3-ff9327a3ce5d-logs\") pod \"af3d6422-4023-40e2-92c3-ff9327a3ce5d\" (UID: \"af3d6422-4023-40e2-92c3-ff9327a3ce5d\") " Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.622353 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-horizon-secret-key\") pod \"95bcc316-2c6f-41e2-bfea-e7fae75cacfa\" (UID: \"95bcc316-2c6f-41e2-bfea-e7fae75cacfa\") " Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.622400 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7x4x\" (UniqueName: \"kubernetes.io/projected/af3d6422-4023-40e2-92c3-ff9327a3ce5d-kube-api-access-j7x4x\") 
pod \"af3d6422-4023-40e2-92c3-ff9327a3ce5d\" (UID: \"af3d6422-4023-40e2-92c3-ff9327a3ce5d\") " Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.622447 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c84dq\" (UniqueName: \"kubernetes.io/projected/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-kube-api-access-c84dq\") pod \"95bcc316-2c6f-41e2-bfea-e7fae75cacfa\" (UID: \"95bcc316-2c6f-41e2-bfea-e7fae75cacfa\") " Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.622520 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-config-data\") pod \"95bcc316-2c6f-41e2-bfea-e7fae75cacfa\" (UID: \"95bcc316-2c6f-41e2-bfea-e7fae75cacfa\") " Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.622636 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-logs\") pod \"95bcc316-2c6f-41e2-bfea-e7fae75cacfa\" (UID: \"95bcc316-2c6f-41e2-bfea-e7fae75cacfa\") " Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.622676 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e628aee-a53e-4a15-860c-90f3b47de705-scripts\") pod \"9e628aee-a53e-4a15-860c-90f3b47de705\" (UID: \"9e628aee-a53e-4a15-860c-90f3b47de705\") " Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.622739 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2vnb\" (UniqueName: \"kubernetes.io/projected/9e628aee-a53e-4a15-860c-90f3b47de705-kube-api-access-j2vnb\") pod \"9e628aee-a53e-4a15-860c-90f3b47de705\" (UID: \"9e628aee-a53e-4a15-860c-90f3b47de705\") " Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.624288 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/9e628aee-a53e-4a15-860c-90f3b47de705-logs" (OuterVolumeSpecName: "logs") pod "9e628aee-a53e-4a15-860c-90f3b47de705" (UID: "9e628aee-a53e-4a15-860c-90f3b47de705"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.624858 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-logs" (OuterVolumeSpecName: "logs") pod "95bcc316-2c6f-41e2-bfea-e7fae75cacfa" (UID: "95bcc316-2c6f-41e2-bfea-e7fae75cacfa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.624950 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af3d6422-4023-40e2-92c3-ff9327a3ce5d-logs" (OuterVolumeSpecName: "logs") pod "af3d6422-4023-40e2-92c3-ff9327a3ce5d" (UID: "af3d6422-4023-40e2-92c3-ff9327a3ce5d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.625305 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e628aee-a53e-4a15-860c-90f3b47de705-config-data" (OuterVolumeSpecName: "config-data") pod "9e628aee-a53e-4a15-860c-90f3b47de705" (UID: "9e628aee-a53e-4a15-860c-90f3b47de705"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.625326 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af3d6422-4023-40e2-92c3-ff9327a3ce5d-scripts" (OuterVolumeSpecName: "scripts") pod "af3d6422-4023-40e2-92c3-ff9327a3ce5d" (UID: "af3d6422-4023-40e2-92c3-ff9327a3ce5d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.625346 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-scripts" (OuterVolumeSpecName: "scripts") pod "95bcc316-2c6f-41e2-bfea-e7fae75cacfa" (UID: "95bcc316-2c6f-41e2-bfea-e7fae75cacfa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.626134 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e628aee-a53e-4a15-860c-90f3b47de705-scripts" (OuterVolumeSpecName: "scripts") pod "9e628aee-a53e-4a15-860c-90f3b47de705" (UID: "9e628aee-a53e-4a15-860c-90f3b47de705"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.625907 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-config-data" (OuterVolumeSpecName: "config-data") pod "95bcc316-2c6f-41e2-bfea-e7fae75cacfa" (UID: "95bcc316-2c6f-41e2-bfea-e7fae75cacfa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.626842 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af3d6422-4023-40e2-92c3-ff9327a3ce5d-config-data" (OuterVolumeSpecName: "config-data") pod "af3d6422-4023-40e2-92c3-ff9327a3ce5d" (UID: "af3d6422-4023-40e2-92c3-ff9327a3ce5d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.630681 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af3d6422-4023-40e2-92c3-ff9327a3ce5d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "af3d6422-4023-40e2-92c3-ff9327a3ce5d" (UID: "af3d6422-4023-40e2-92c3-ff9327a3ce5d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.633210 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af3d6422-4023-40e2-92c3-ff9327a3ce5d-kube-api-access-j7x4x" (OuterVolumeSpecName: "kube-api-access-j7x4x") pod "af3d6422-4023-40e2-92c3-ff9327a3ce5d" (UID: "af3d6422-4023-40e2-92c3-ff9327a3ce5d"). InnerVolumeSpecName "kube-api-access-j7x4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.633286 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e628aee-a53e-4a15-860c-90f3b47de705-kube-api-access-j2vnb" (OuterVolumeSpecName: "kube-api-access-j2vnb") pod "9e628aee-a53e-4a15-860c-90f3b47de705" (UID: "9e628aee-a53e-4a15-860c-90f3b47de705"). InnerVolumeSpecName "kube-api-access-j2vnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.633444 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e628aee-a53e-4a15-860c-90f3b47de705-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9e628aee-a53e-4a15-860c-90f3b47de705" (UID: "9e628aee-a53e-4a15-860c-90f3b47de705"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.633749 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "95bcc316-2c6f-41e2-bfea-e7fae75cacfa" (UID: "95bcc316-2c6f-41e2-bfea-e7fae75cacfa"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.637381 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-kube-api-access-c84dq" (OuterVolumeSpecName: "kube-api-access-c84dq") pod "95bcc316-2c6f-41e2-bfea-e7fae75cacfa" (UID: "95bcc316-2c6f-41e2-bfea-e7fae75cacfa"). InnerVolumeSpecName "kube-api-access-c84dq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.727893 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2vnb\" (UniqueName: \"kubernetes.io/projected/9e628aee-a53e-4a15-860c-90f3b47de705-kube-api-access-j2vnb\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.727934 4796 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9e628aee-a53e-4a15-860c-90f3b47de705-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.727948 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e628aee-a53e-4a15-860c-90f3b47de705-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.727964 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.727976 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af3d6422-4023-40e2-92c3-ff9327a3ce5d-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.727990 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e628aee-a53e-4a15-860c-90f3b47de705-logs\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.728004 4796 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/af3d6422-4023-40e2-92c3-ff9327a3ce5d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.728017 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af3d6422-4023-40e2-92c3-ff9327a3ce5d-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.728045 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af3d6422-4023-40e2-92c3-ff9327a3ce5d-logs\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.728058 4796 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.728071 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7x4x\" (UniqueName: \"kubernetes.io/projected/af3d6422-4023-40e2-92c3-ff9327a3ce5d-kube-api-access-j7x4x\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 
14:45:40.728084 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c84dq\" (UniqueName: \"kubernetes.io/projected/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-kube-api-access-c84dq\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.728096 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.728108 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95bcc316-2c6f-41e2-bfea-e7fae75cacfa-logs\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.728120 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e628aee-a53e-4a15-860c-90f3b47de705-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.928066 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b5df6b75c-4v2jt" event={"ID":"95bcc316-2c6f-41e2-bfea-e7fae75cacfa","Type":"ContainerDied","Data":"fb6a2a55de834f405671a16523d250b4732f16e42890e433ea5990474bb1db87"} Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.928114 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b5df6b75c-4v2jt" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.929487 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7c7cb86f49-tsk8g" Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.929479 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c7cb86f49-tsk8g" event={"ID":"af3d6422-4023-40e2-92c3-ff9327a3ce5d","Type":"ContainerDied","Data":"f70994a2132e4a835229a4c93d97c8d240907cc3bd64d59631504cbd075cdf6a"} Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.936425 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cf8d5f86f-2vlzd" event={"ID":"9e628aee-a53e-4a15-860c-90f3b47de705","Type":"ContainerDied","Data":"d7d28c73457d622b503dc0ed4404f49d202f07345035c3a64fbfcfe450f4615c"} Nov 25 14:45:40 crc kubenswrapper[4796]: I1125 14:45:40.936471 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5cf8d5f86f-2vlzd" Nov 25 14:45:40 crc kubenswrapper[4796]: E1125 14:45:40.946136 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-ttn2n" podUID="c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2" Nov 25 14:45:41 crc kubenswrapper[4796]: I1125 14:45:41.034735 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7c7cb86f49-tsk8g"] Nov 25 14:45:41 crc kubenswrapper[4796]: I1125 14:45:41.043898 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7c7cb86f49-tsk8g"] Nov 25 14:45:41 crc kubenswrapper[4796]: I1125 14:45:41.078439 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5cf8d5f86f-2vlzd"] Nov 25 14:45:41 crc kubenswrapper[4796]: I1125 14:45:41.086747 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5cf8d5f86f-2vlzd"] Nov 25 14:45:41 crc kubenswrapper[4796]: I1125 14:45:41.101273 4796 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/horizon-6b5df6b75c-4v2jt"] Nov 25 14:45:41 crc kubenswrapper[4796]: I1125 14:45:41.108986 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6b5df6b75c-4v2jt"] Nov 25 14:45:42 crc kubenswrapper[4796]: I1125 14:45:42.429808 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95bcc316-2c6f-41e2-bfea-e7fae75cacfa" path="/var/lib/kubelet/pods/95bcc316-2c6f-41e2-bfea-e7fae75cacfa/volumes" Nov 25 14:45:42 crc kubenswrapper[4796]: I1125 14:45:42.431971 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e628aee-a53e-4a15-860c-90f3b47de705" path="/var/lib/kubelet/pods/9e628aee-a53e-4a15-860c-90f3b47de705/volumes" Nov 25 14:45:42 crc kubenswrapper[4796]: I1125 14:45:42.433072 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af3d6422-4023-40e2-92c3-ff9327a3ce5d" path="/var/lib/kubelet/pods/af3d6422-4023-40e2-92c3-ff9327a3ce5d/volumes" Nov 25 14:45:43 crc kubenswrapper[4796]: I1125 14:45:43.923985 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-79zq7" Nov 25 14:45:43 crc kubenswrapper[4796]: I1125 14:45:43.986454 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-dns-swift-storage-0\") pod \"821175f1-a773-4def-b744-22423894346c\" (UID: \"821175f1-a773-4def-b744-22423894346c\") " Nov 25 14:45:43 crc kubenswrapper[4796]: I1125 14:45:43.986907 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-dns-svc\") pod \"821175f1-a773-4def-b744-22423894346c\" (UID: \"821175f1-a773-4def-b744-22423894346c\") " Nov 25 14:45:43 crc kubenswrapper[4796]: I1125 14:45:43.987067 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-config\") pod \"821175f1-a773-4def-b744-22423894346c\" (UID: \"821175f1-a773-4def-b744-22423894346c\") " Nov 25 14:45:43 crc kubenswrapper[4796]: I1125 14:45:43.987141 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lzld\" (UniqueName: \"kubernetes.io/projected/821175f1-a773-4def-b744-22423894346c-kube-api-access-4lzld\") pod \"821175f1-a773-4def-b744-22423894346c\" (UID: \"821175f1-a773-4def-b744-22423894346c\") " Nov 25 14:45:43 crc kubenswrapper[4796]: I1125 14:45:43.987190 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-ovsdbserver-nb\") pod \"821175f1-a773-4def-b744-22423894346c\" (UID: \"821175f1-a773-4def-b744-22423894346c\") " Nov 25 14:45:43 crc kubenswrapper[4796]: I1125 14:45:43.987236 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-ovsdbserver-sb\") pod \"821175f1-a773-4def-b744-22423894346c\" (UID: \"821175f1-a773-4def-b744-22423894346c\") " Nov 25 14:45:44 crc kubenswrapper[4796]: I1125 14:45:44.009199 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/821175f1-a773-4def-b744-22423894346c-kube-api-access-4lzld" (OuterVolumeSpecName: "kube-api-access-4lzld") pod "821175f1-a773-4def-b744-22423894346c" (UID: "821175f1-a773-4def-b744-22423894346c"). InnerVolumeSpecName "kube-api-access-4lzld". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:45:44 crc kubenswrapper[4796]: I1125 14:45:44.011946 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-79zq7" event={"ID":"821175f1-a773-4def-b744-22423894346c","Type":"ContainerDied","Data":"1c32acd170122510742e9ed99066c9bd9e885d618b85ba6c1ab9bcd0850bd809"} Nov 25 14:45:44 crc kubenswrapper[4796]: I1125 14:45:44.011998 4796 scope.go:117] "RemoveContainer" containerID="c9e9dcd9e4f1f98516f78c5ee8e84d685accb67b14f1d0d698a9cfca628fd119" Nov 25 14:45:44 crc kubenswrapper[4796]: I1125 14:45:44.012080 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-79zq7" Nov 25 14:45:44 crc kubenswrapper[4796]: I1125 14:45:44.056053 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-config" (OuterVolumeSpecName: "config") pod "821175f1-a773-4def-b744-22423894346c" (UID: "821175f1-a773-4def-b744-22423894346c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:45:44 crc kubenswrapper[4796]: I1125 14:45:44.076918 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "821175f1-a773-4def-b744-22423894346c" (UID: "821175f1-a773-4def-b744-22423894346c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:45:44 crc kubenswrapper[4796]: I1125 14:45:44.077560 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "821175f1-a773-4def-b744-22423894346c" (UID: "821175f1-a773-4def-b744-22423894346c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:45:44 crc kubenswrapper[4796]: I1125 14:45:44.077628 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "821175f1-a773-4def-b744-22423894346c" (UID: "821175f1-a773-4def-b744-22423894346c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:45:44 crc kubenswrapper[4796]: I1125 14:45:44.079961 4796 scope.go:117] "RemoveContainer" containerID="ded7f77298f21026d964cd45b910c7c502561b7fa19e3af2bf858ab3261ce24e" Nov 25 14:45:44 crc kubenswrapper[4796]: I1125 14:45:44.090772 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:44 crc kubenswrapper[4796]: I1125 14:45:44.090801 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lzld\" (UniqueName: \"kubernetes.io/projected/821175f1-a773-4def-b744-22423894346c-kube-api-access-4lzld\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:44 crc kubenswrapper[4796]: I1125 14:45:44.090815 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:44 crc kubenswrapper[4796]: I1125 14:45:44.090827 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:44 crc kubenswrapper[4796]: I1125 14:45:44.090838 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:44 crc kubenswrapper[4796]: I1125 14:45:44.091164 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "821175f1-a773-4def-b744-22423894346c" (UID: "821175f1-a773-4def-b744-22423894346c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:45:44 crc kubenswrapper[4796]: I1125 14:45:44.192536 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/821175f1-a773-4def-b744-22423894346c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:44 crc kubenswrapper[4796]: I1125 14:45:44.349230 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-79zq7"] Nov 25 14:45:44 crc kubenswrapper[4796]: I1125 14:45:44.357840 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-79zq7"] Nov 25 14:45:44 crc kubenswrapper[4796]: I1125 14:45:44.380109 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cd9956864-5xkx5"] Nov 25 14:45:46 crc kubenswrapper[4796]: I1125 14:45:44.423677 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="821175f1-a773-4def-b744-22423894346c" path="/var/lib/kubelet/pods/821175f1-a773-4def-b744-22423894346c/volumes" Nov 25 14:45:46 crc kubenswrapper[4796]: I1125 14:45:44.451544 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-674489f5b-nnl97"] Nov 25 14:45:46 crc kubenswrapper[4796]: I1125 14:45:44.521254 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-n86kb"] Nov 25 14:45:46 crc kubenswrapper[4796]: W1125 14:45:44.525547 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaecc2e8c_ded8_42b7_b1a3_df3eeedc84f7.slice/crio-af1411edf91c1a46004751f1a322fe7b5f90ddb89b97aba03c1b2b75e2bd97ba WatchSource:0}: Error finding container af1411edf91c1a46004751f1a322fe7b5f90ddb89b97aba03c1b2b75e2bd97ba: Status 404 returned error can't find the container with id af1411edf91c1a46004751f1a322fe7b5f90ddb89b97aba03c1b2b75e2bd97ba Nov 25 14:45:46 crc kubenswrapper[4796]: I1125 14:45:44.531664 4796 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 14:45:46 crc kubenswrapper[4796]: W1125 14:45:44.689073 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64d653a7_a2c8_439a_9b7c_733682c79eeb.slice/crio-76b47d57e3d4eecb2e98e6e506d618898a481cd8845ec3e3285b5828e46bc93e WatchSource:0}: Error finding container 76b47d57e3d4eecb2e98e6e506d618898a481cd8845ec3e3285b5828e46bc93e: Status 404 returned error can't find the container with id 76b47d57e3d4eecb2e98e6e506d618898a481cd8845ec3e3285b5828e46bc93e Nov 25 14:45:46 crc kubenswrapper[4796]: I1125 14:45:44.701180 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-d58n6"] Nov 25 14:45:46 crc kubenswrapper[4796]: I1125 14:45:45.023404 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n86kb" event={"ID":"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7","Type":"ContainerStarted","Data":"af1411edf91c1a46004751f1a322fe7b5f90ddb89b97aba03c1b2b75e2bd97ba"} Nov 25 14:45:46 crc kubenswrapper[4796]: I1125 14:45:45.024837 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" event={"ID":"64d653a7-a2c8-439a-9b7c-733682c79eeb","Type":"ContainerStarted","Data":"76b47d57e3d4eecb2e98e6e506d618898a481cd8845ec3e3285b5828e46bc93e"} Nov 25 14:45:46 crc kubenswrapper[4796]: I1125 14:45:45.026117 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-674489f5b-nnl97" event={"ID":"b8f52433-dd17-499e-8ac4-bda250a52460","Type":"ContainerStarted","Data":"3390694fcfc219523b0c7ca63e0b6baa90502942a1e3ae5eb19377c52e4c8051"} Nov 25 14:45:46 crc kubenswrapper[4796]: I1125 14:45:45.029040 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cd9956864-5xkx5" 
event={"ID":"23942f6c-a777-4b11-a51d-ccaee1fff6e7","Type":"ContainerStarted","Data":"75003a311ed2d81553af263aec7a65ad04eab0b4da26091f438fb6fed28b70c7"} Nov 25 14:45:46 crc kubenswrapper[4796]: E1125 14:45:45.387117 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 25 14:45:46 crc kubenswrapper[4796]: E1125 14:45:45.387309 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundl
e,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c6fmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-tt8qv_openstack(b0493d28-3276-4a85-a800-4d0b1576c407): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 14:45:46 crc kubenswrapper[4796]: E1125 14:45:45.388756 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-tt8qv" podUID="b0493d28-3276-4a85-a800-4d0b1576c407" Nov 25 14:45:46 crc kubenswrapper[4796]: I1125 14:45:45.761194 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 14:45:46 crc kubenswrapper[4796]: W1125 14:45:45.771249 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cc287c5_aa10_4895_96ec_98cce7ff2f63.slice/crio-a95eda115bd467e4fb83a4590bab08876a401ab4e1a9c27638a02822012546c9 WatchSource:0}: Error finding container 
a95eda115bd467e4fb83a4590bab08876a401ab4e1a9c27638a02822012546c9: Status 404 returned error can't find the container with id a95eda115bd467e4fb83a4590bab08876a401ab4e1a9c27638a02822012546c9 Nov 25 14:45:46 crc kubenswrapper[4796]: I1125 14:45:46.042562 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n86kb" event={"ID":"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7","Type":"ContainerStarted","Data":"a4b36e21ccdfe4148ba49233cd5e90ca89db8ba31e9ac923ae97dcb756dbe492"} Nov 25 14:45:46 crc kubenswrapper[4796]: I1125 14:45:46.046461 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2cc287c5-aa10-4895-96ec-98cce7ff2f63","Type":"ContainerStarted","Data":"a95eda115bd467e4fb83a4590bab08876a401ab4e1a9c27638a02822012546c9"} Nov 25 14:45:46 crc kubenswrapper[4796]: I1125 14:45:46.053416 4796 generic.go:334] "Generic (PLEG): container finished" podID="64d653a7-a2c8-439a-9b7c-733682c79eeb" containerID="eb5462985df4501cd386f1d622c7d422cbb32538ceadcb98b67632635cf653e7" exitCode=0 Nov 25 14:45:46 crc kubenswrapper[4796]: I1125 14:45:46.053500 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" event={"ID":"64d653a7-a2c8-439a-9b7c-733682c79eeb","Type":"ContainerDied","Data":"eb5462985df4501cd386f1d622c7d422cbb32538ceadcb98b67632635cf653e7"} Nov 25 14:45:46 crc kubenswrapper[4796]: I1125 14:45:46.067478 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-n86kb" podStartSLOduration=14.067458249 podStartE2EDuration="14.067458249s" podCreationTimestamp="2025-11-25 14:45:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:45:46.058869371 +0000 UTC m=+1274.401978785" watchObservedRunningTime="2025-11-25 14:45:46.067458249 +0000 UTC m=+1274.410567673" Nov 25 14:45:46 crc kubenswrapper[4796]: I1125 
14:45:46.069810 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vmgbv" event={"ID":"1e99260e-8b90-4cd0-8417-8dc3c142a743","Type":"ContainerStarted","Data":"09f3104b61f642b98d2f0d8f9e593a2b57967e74c31b223316fefe3075fdb61b"} Nov 25 14:45:46 crc kubenswrapper[4796]: I1125 14:45:46.076344 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"395c0fb7-8e73-4c01-a5fb-6b17af27e57d","Type":"ContainerStarted","Data":"7c729dcdefc39f9939dd17140f8926e4086c5e4db268a5be72f078d7030e32a6"} Nov 25 14:45:46 crc kubenswrapper[4796]: E1125 14:45:46.086624 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-tt8qv" podUID="b0493d28-3276-4a85-a800-4d0b1576c407" Nov 25 14:45:46 crc kubenswrapper[4796]: I1125 14:45:46.162920 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-vmgbv" podStartSLOduration=7.414976084 podStartE2EDuration="33.162901024s" podCreationTimestamp="2025-11-25 14:45:13 +0000 UTC" firstStartedPulling="2025-11-25 14:45:14.596349309 +0000 UTC m=+1242.939458733" lastFinishedPulling="2025-11-25 14:45:40.344274249 +0000 UTC m=+1268.687383673" observedRunningTime="2025-11-25 14:45:46.103051838 +0000 UTC m=+1274.446161272" watchObservedRunningTime="2025-11-25 14:45:46.162901024 +0000 UTC m=+1274.506010448" Nov 25 14:45:46 crc kubenswrapper[4796]: I1125 14:45:46.287811 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 14:45:46 crc kubenswrapper[4796]: W1125 14:45:46.316870 4796 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32891459_a961_4f3a_9820_5eb167599bd9.slice/crio-cc48efd26a85593c5c7694f2f994c3afb14c9a94303c09c43760b067f645130d WatchSource:0}: Error finding container cc48efd26a85593c5c7694f2f994c3afb14c9a94303c09c43760b067f645130d: Status 404 returned error can't find the container with id cc48efd26a85593c5c7694f2f994c3afb14c9a94303c09c43760b067f645130d Nov 25 14:45:47 crc kubenswrapper[4796]: I1125 14:45:47.085606 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" event={"ID":"64d653a7-a2c8-439a-9b7c-733682c79eeb","Type":"ContainerStarted","Data":"9ca91f1d43260b84d191e65ad3667fbaa905792f9e3e5333af6da6674259b85f"} Nov 25 14:45:47 crc kubenswrapper[4796]: I1125 14:45:47.087116 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" Nov 25 14:45:47 crc kubenswrapper[4796]: I1125 14:45:47.088947 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2cc287c5-aa10-4895-96ec-98cce7ff2f63","Type":"ContainerStarted","Data":"34a4cb52bb5cd84279869e5338c98a5091c9559a5881e3100a9dac0a7fae099c"} Nov 25 14:45:47 crc kubenswrapper[4796]: I1125 14:45:47.092769 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-674489f5b-nnl97" event={"ID":"b8f52433-dd17-499e-8ac4-bda250a52460","Type":"ContainerStarted","Data":"e091ae7cb4fcbcb870e073c0a88aaeabe8d44cb0dca42f2894fd60614b94f391"} Nov 25 14:45:47 crc kubenswrapper[4796]: I1125 14:45:47.092817 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-674489f5b-nnl97" event={"ID":"b8f52433-dd17-499e-8ac4-bda250a52460","Type":"ContainerStarted","Data":"8c7d1e827e30077c0108d145651fb4114981e69cd63bbfe59f033fc641a906c2"} Nov 25 14:45:47 crc kubenswrapper[4796]: I1125 14:45:47.097920 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cd9956864-5xkx5" 
event={"ID":"23942f6c-a777-4b11-a51d-ccaee1fff6e7","Type":"ContainerStarted","Data":"11b12a44fb12af68b288b537b801c1396328386b617bea8abdea96586dfce0b0"} Nov 25 14:45:47 crc kubenswrapper[4796]: I1125 14:45:47.097962 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cd9956864-5xkx5" event={"ID":"23942f6c-a777-4b11-a51d-ccaee1fff6e7","Type":"ContainerStarted","Data":"ae453e3aaa7cbba30fd5bc3de23897fd5dc332bf3b291917085c8ce4126081c4"} Nov 25 14:45:47 crc kubenswrapper[4796]: I1125 14:45:47.101615 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"32891459-a961-4f3a-9820-5eb167599bd9","Type":"ContainerStarted","Data":"d98ceafcc5f829a7111383dc6d8fa5e6f20c71e4c963c79ccd29c3c5879bb6f7"} Nov 25 14:45:47 crc kubenswrapper[4796]: I1125 14:45:47.101657 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"32891459-a961-4f3a-9820-5eb167599bd9","Type":"ContainerStarted","Data":"cc48efd26a85593c5c7694f2f994c3afb14c9a94303c09c43760b067f645130d"} Nov 25 14:45:47 crc kubenswrapper[4796]: I1125 14:45:47.112002 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" podStartSLOduration=15.111981232 podStartE2EDuration="15.111981232s" podCreationTimestamp="2025-11-25 14:45:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:45:47.104799767 +0000 UTC m=+1275.447909231" watchObservedRunningTime="2025-11-25 14:45:47.111981232 +0000 UTC m=+1275.455090666" Nov 25 14:45:47 crc kubenswrapper[4796]: I1125 14:45:47.134465 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7cd9956864-5xkx5" podStartSLOduration=24.631002212 podStartE2EDuration="26.134445572s" podCreationTimestamp="2025-11-25 14:45:21 +0000 UTC" firstStartedPulling="2025-11-25 
14:45:44.388398854 +0000 UTC m=+1272.731508278" lastFinishedPulling="2025-11-25 14:45:45.891842214 +0000 UTC m=+1274.234951638" observedRunningTime="2025-11-25 14:45:47.123435319 +0000 UTC m=+1275.466544763" watchObservedRunningTime="2025-11-25 14:45:47.134445572 +0000 UTC m=+1275.477555006" Nov 25 14:45:47 crc kubenswrapper[4796]: I1125 14:45:47.165446 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-674489f5b-nnl97" podStartSLOduration=23.736127089 podStartE2EDuration="25.165409527s" podCreationTimestamp="2025-11-25 14:45:22 +0000 UTC" firstStartedPulling="2025-11-25 14:45:44.463245547 +0000 UTC m=+1272.806354971" lastFinishedPulling="2025-11-25 14:45:45.892527985 +0000 UTC m=+1274.235637409" observedRunningTime="2025-11-25 14:45:47.150016427 +0000 UTC m=+1275.493125871" watchObservedRunningTime="2025-11-25 14:45:47.165409527 +0000 UTC m=+1275.508518951" Nov 25 14:45:48 crc kubenswrapper[4796]: I1125 14:45:48.112730 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"395c0fb7-8e73-4c01-a5fb-6b17af27e57d","Type":"ContainerStarted","Data":"51e66da980cc6d21137e7e19efabd2613836b2e47f8a568239f4ef10a6c89ff7"} Nov 25 14:45:48 crc kubenswrapper[4796]: I1125 14:45:48.116216 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2cc287c5-aa10-4895-96ec-98cce7ff2f63","Type":"ContainerStarted","Data":"148af3a4b2d78267ecc95158699ca4a686546fc9a707228e9eb139836d25883b"} Nov 25 14:45:48 crc kubenswrapper[4796]: I1125 14:45:48.116345 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2cc287c5-aa10-4895-96ec-98cce7ff2f63" containerName="glance-log" containerID="cri-o://34a4cb52bb5cd84279869e5338c98a5091c9559a5881e3100a9dac0a7fae099c" gracePeriod=30 Nov 25 14:45:48 crc kubenswrapper[4796]: I1125 14:45:48.116868 4796 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2cc287c5-aa10-4895-96ec-98cce7ff2f63" containerName="glance-httpd" containerID="cri-o://148af3a4b2d78267ecc95158699ca4a686546fc9a707228e9eb139836d25883b" gracePeriod=30 Nov 25 14:45:48 crc kubenswrapper[4796]: I1125 14:45:48.125415 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="32891459-a961-4f3a-9820-5eb167599bd9" containerName="glance-log" containerID="cri-o://d98ceafcc5f829a7111383dc6d8fa5e6f20c71e4c963c79ccd29c3c5879bb6f7" gracePeriod=30 Nov 25 14:45:48 crc kubenswrapper[4796]: I1125 14:45:48.125527 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"32891459-a961-4f3a-9820-5eb167599bd9","Type":"ContainerStarted","Data":"a7a5c9a5d580a93d1530c8edb09eeec6e8fcb9401ce1de7a202189c5573893ec"} Nov 25 14:45:48 crc kubenswrapper[4796]: I1125 14:45:48.126633 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="32891459-a961-4f3a-9820-5eb167599bd9" containerName="glance-httpd" containerID="cri-o://a7a5c9a5d580a93d1530c8edb09eeec6e8fcb9401ce1de7a202189c5573893ec" gracePeriod=30 Nov 25 14:45:48 crc kubenswrapper[4796]: I1125 14:45:48.142272 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=16.142257121 podStartE2EDuration="16.142257121s" podCreationTimestamp="2025-11-25 14:45:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:45:48.140149305 +0000 UTC m=+1276.483258859" watchObservedRunningTime="2025-11-25 14:45:48.142257121 +0000 UTC m=+1276.485366545" Nov 25 14:45:48 crc kubenswrapper[4796]: I1125 14:45:48.167712 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-default-internal-api-0" podStartSLOduration=16.167690084 podStartE2EDuration="16.167690084s" podCreationTimestamp="2025-11-25 14:45:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:45:48.163903495 +0000 UTC m=+1276.507012939" watchObservedRunningTime="2025-11-25 14:45:48.167690084 +0000 UTC m=+1276.510799508" Nov 25 14:45:48 crc kubenswrapper[4796]: I1125 14:45:48.519984 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-79zq7" podUID="821175f1-a773-4def-b744-22423894346c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.131:5353: i/o timeout" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:48.999759 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.077937 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32891459-a961-4f3a-9820-5eb167599bd9-scripts\") pod \"32891459-a961-4f3a-9820-5eb167599bd9\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.082470 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"32891459-a961-4f3a-9820-5eb167599bd9\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.082783 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klgvn\" (UniqueName: \"kubernetes.io/projected/32891459-a961-4f3a-9820-5eb167599bd9-kube-api-access-klgvn\") pod \"32891459-a961-4f3a-9820-5eb167599bd9\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " Nov 25 14:45:49 crc 
kubenswrapper[4796]: I1125 14:45:49.082805 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32891459-a961-4f3a-9820-5eb167599bd9-logs\") pod \"32891459-a961-4f3a-9820-5eb167599bd9\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.082826 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32891459-a961-4f3a-9820-5eb167599bd9-httpd-run\") pod \"32891459-a961-4f3a-9820-5eb167599bd9\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.082881 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32891459-a961-4f3a-9820-5eb167599bd9-config-data\") pod \"32891459-a961-4f3a-9820-5eb167599bd9\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.083022 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32891459-a961-4f3a-9820-5eb167599bd9-combined-ca-bundle\") pod \"32891459-a961-4f3a-9820-5eb167599bd9\" (UID: \"32891459-a961-4f3a-9820-5eb167599bd9\") " Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.083990 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32891459-a961-4f3a-9820-5eb167599bd9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "32891459-a961-4f3a-9820-5eb167599bd9" (UID: "32891459-a961-4f3a-9820-5eb167599bd9"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.084036 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32891459-a961-4f3a-9820-5eb167599bd9-logs" (OuterVolumeSpecName: "logs") pod "32891459-a961-4f3a-9820-5eb167599bd9" (UID: "32891459-a961-4f3a-9820-5eb167599bd9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.087593 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32891459-a961-4f3a-9820-5eb167599bd9-kube-api-access-klgvn" (OuterVolumeSpecName: "kube-api-access-klgvn") pod "32891459-a961-4f3a-9820-5eb167599bd9" (UID: "32891459-a961-4f3a-9820-5eb167599bd9"). InnerVolumeSpecName "kube-api-access-klgvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.088717 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "32891459-a961-4f3a-9820-5eb167599bd9" (UID: "32891459-a961-4f3a-9820-5eb167599bd9"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.097047 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32891459-a961-4f3a-9820-5eb167599bd9-scripts" (OuterVolumeSpecName: "scripts") pod "32891459-a961-4f3a-9820-5eb167599bd9" (UID: "32891459-a961-4f3a-9820-5eb167599bd9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.127486 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32891459-a961-4f3a-9820-5eb167599bd9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32891459-a961-4f3a-9820-5eb167599bd9" (UID: "32891459-a961-4f3a-9820-5eb167599bd9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.141673 4796 generic.go:334] "Generic (PLEG): container finished" podID="32891459-a961-4f3a-9820-5eb167599bd9" containerID="a7a5c9a5d580a93d1530c8edb09eeec6e8fcb9401ce1de7a202189c5573893ec" exitCode=143 Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.141710 4796 generic.go:334] "Generic (PLEG): container finished" podID="32891459-a961-4f3a-9820-5eb167599bd9" containerID="d98ceafcc5f829a7111383dc6d8fa5e6f20c71e4c963c79ccd29c3c5879bb6f7" exitCode=143 Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.141751 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"32891459-a961-4f3a-9820-5eb167599bd9","Type":"ContainerDied","Data":"a7a5c9a5d580a93d1530c8edb09eeec6e8fcb9401ce1de7a202189c5573893ec"} Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.141778 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"32891459-a961-4f3a-9820-5eb167599bd9","Type":"ContainerDied","Data":"d98ceafcc5f829a7111383dc6d8fa5e6f20c71e4c963c79ccd29c3c5879bb6f7"} Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.141791 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"32891459-a961-4f3a-9820-5eb167599bd9","Type":"ContainerDied","Data":"cc48efd26a85593c5c7694f2f994c3afb14c9a94303c09c43760b067f645130d"} Nov 25 14:45:49 crc kubenswrapper[4796]: 
I1125 14:45:49.141806 4796 scope.go:117] "RemoveContainer" containerID="a7a5c9a5d580a93d1530c8edb09eeec6e8fcb9401ce1de7a202189c5573893ec" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.141961 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.157196 4796 generic.go:334] "Generic (PLEG): container finished" podID="aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7" containerID="a4b36e21ccdfe4148ba49233cd5e90ca89db8ba31e9ac923ae97dcb756dbe492" exitCode=0 Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.157265 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n86kb" event={"ID":"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7","Type":"ContainerDied","Data":"a4b36e21ccdfe4148ba49233cd5e90ca89db8ba31e9ac923ae97dcb756dbe492"} Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.160045 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32891459-a961-4f3a-9820-5eb167599bd9-config-data" (OuterVolumeSpecName: "config-data") pod "32891459-a961-4f3a-9820-5eb167599bd9" (UID: "32891459-a961-4f3a-9820-5eb167599bd9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.162559 4796 generic.go:334] "Generic (PLEG): container finished" podID="2cc287c5-aa10-4895-96ec-98cce7ff2f63" containerID="148af3a4b2d78267ecc95158699ca4a686546fc9a707228e9eb139836d25883b" exitCode=0 Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.162611 4796 generic.go:334] "Generic (PLEG): container finished" podID="2cc287c5-aa10-4895-96ec-98cce7ff2f63" containerID="34a4cb52bb5cd84279869e5338c98a5091c9559a5881e3100a9dac0a7fae099c" exitCode=143 Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.163649 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2cc287c5-aa10-4895-96ec-98cce7ff2f63","Type":"ContainerDied","Data":"148af3a4b2d78267ecc95158699ca4a686546fc9a707228e9eb139836d25883b"} Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.163710 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2cc287c5-aa10-4895-96ec-98cce7ff2f63","Type":"ContainerDied","Data":"34a4cb52bb5cd84279869e5338c98a5091c9559a5881e3100a9dac0a7fae099c"} Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.184991 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32891459-a961-4f3a-9820-5eb167599bd9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.185033 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32891459-a961-4f3a-9820-5eb167599bd9-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.185068 4796 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 25 14:45:49 crc 
kubenswrapper[4796]: I1125 14:45:49.185081 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klgvn\" (UniqueName: \"kubernetes.io/projected/32891459-a961-4f3a-9820-5eb167599bd9-kube-api-access-klgvn\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.185095 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32891459-a961-4f3a-9820-5eb167599bd9-logs\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.185105 4796 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32891459-a961-4f3a-9820-5eb167599bd9-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.185116 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32891459-a961-4f3a-9820-5eb167599bd9-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.204760 4796 scope.go:117] "RemoveContainer" containerID="d98ceafcc5f829a7111383dc6d8fa5e6f20c71e4c963c79ccd29c3c5879bb6f7" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.205109 4796 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.229893 4796 scope.go:117] "RemoveContainer" containerID="a7a5c9a5d580a93d1530c8edb09eeec6e8fcb9401ce1de7a202189c5573893ec" Nov 25 14:45:49 crc kubenswrapper[4796]: E1125 14:45:49.230487 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7a5c9a5d580a93d1530c8edb09eeec6e8fcb9401ce1de7a202189c5573893ec\": container with ID starting with a7a5c9a5d580a93d1530c8edb09eeec6e8fcb9401ce1de7a202189c5573893ec not found: ID does not 
exist" containerID="a7a5c9a5d580a93d1530c8edb09eeec6e8fcb9401ce1de7a202189c5573893ec" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.230531 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7a5c9a5d580a93d1530c8edb09eeec6e8fcb9401ce1de7a202189c5573893ec"} err="failed to get container status \"a7a5c9a5d580a93d1530c8edb09eeec6e8fcb9401ce1de7a202189c5573893ec\": rpc error: code = NotFound desc = could not find container \"a7a5c9a5d580a93d1530c8edb09eeec6e8fcb9401ce1de7a202189c5573893ec\": container with ID starting with a7a5c9a5d580a93d1530c8edb09eeec6e8fcb9401ce1de7a202189c5573893ec not found: ID does not exist" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.230557 4796 scope.go:117] "RemoveContainer" containerID="d98ceafcc5f829a7111383dc6d8fa5e6f20c71e4c963c79ccd29c3c5879bb6f7" Nov 25 14:45:49 crc kubenswrapper[4796]: E1125 14:45:49.230979 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d98ceafcc5f829a7111383dc6d8fa5e6f20c71e4c963c79ccd29c3c5879bb6f7\": container with ID starting with d98ceafcc5f829a7111383dc6d8fa5e6f20c71e4c963c79ccd29c3c5879bb6f7 not found: ID does not exist" containerID="d98ceafcc5f829a7111383dc6d8fa5e6f20c71e4c963c79ccd29c3c5879bb6f7" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.231020 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d98ceafcc5f829a7111383dc6d8fa5e6f20c71e4c963c79ccd29c3c5879bb6f7"} err="failed to get container status \"d98ceafcc5f829a7111383dc6d8fa5e6f20c71e4c963c79ccd29c3c5879bb6f7\": rpc error: code = NotFound desc = could not find container \"d98ceafcc5f829a7111383dc6d8fa5e6f20c71e4c963c79ccd29c3c5879bb6f7\": container with ID starting with d98ceafcc5f829a7111383dc6d8fa5e6f20c71e4c963c79ccd29c3c5879bb6f7 not found: ID does not exist" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.231063 4796 scope.go:117] 
"RemoveContainer" containerID="a7a5c9a5d580a93d1530c8edb09eeec6e8fcb9401ce1de7a202189c5573893ec" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.231320 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7a5c9a5d580a93d1530c8edb09eeec6e8fcb9401ce1de7a202189c5573893ec"} err="failed to get container status \"a7a5c9a5d580a93d1530c8edb09eeec6e8fcb9401ce1de7a202189c5573893ec\": rpc error: code = NotFound desc = could not find container \"a7a5c9a5d580a93d1530c8edb09eeec6e8fcb9401ce1de7a202189c5573893ec\": container with ID starting with a7a5c9a5d580a93d1530c8edb09eeec6e8fcb9401ce1de7a202189c5573893ec not found: ID does not exist" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.231339 4796 scope.go:117] "RemoveContainer" containerID="d98ceafcc5f829a7111383dc6d8fa5e6f20c71e4c963c79ccd29c3c5879bb6f7" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.231730 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d98ceafcc5f829a7111383dc6d8fa5e6f20c71e4c963c79ccd29c3c5879bb6f7"} err="failed to get container status \"d98ceafcc5f829a7111383dc6d8fa5e6f20c71e4c963c79ccd29c3c5879bb6f7\": rpc error: code = NotFound desc = could not find container \"d98ceafcc5f829a7111383dc6d8fa5e6f20c71e4c963c79ccd29c3c5879bb6f7\": container with ID starting with d98ceafcc5f829a7111383dc6d8fa5e6f20c71e4c963c79ccd29c3c5879bb6f7 not found: ID does not exist" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.244453 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.286661 4796 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.387985 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc287c5-aa10-4895-96ec-98cce7ff2f63-combined-ca-bundle\") pod \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.388019 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.388065 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cc287c5-aa10-4895-96ec-98cce7ff2f63-config-data\") pod \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.388146 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cc287c5-aa10-4895-96ec-98cce7ff2f63-scripts\") pod \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.388190 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2cc287c5-aa10-4895-96ec-98cce7ff2f63-httpd-run\") pod \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\" (UID: 
\"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.388227 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cc287c5-aa10-4895-96ec-98cce7ff2f63-logs\") pod \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.388276 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnrbj\" (UniqueName: \"kubernetes.io/projected/2cc287c5-aa10-4895-96ec-98cce7ff2f63-kube-api-access-rnrbj\") pod \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\" (UID: \"2cc287c5-aa10-4895-96ec-98cce7ff2f63\") " Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.389106 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cc287c5-aa10-4895-96ec-98cce7ff2f63-logs" (OuterVolumeSpecName: "logs") pod "2cc287c5-aa10-4895-96ec-98cce7ff2f63" (UID: "2cc287c5-aa10-4895-96ec-98cce7ff2f63"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.389418 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cc287c5-aa10-4895-96ec-98cce7ff2f63-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2cc287c5-aa10-4895-96ec-98cce7ff2f63" (UID: "2cc287c5-aa10-4895-96ec-98cce7ff2f63"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.389636 4796 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2cc287c5-aa10-4895-96ec-98cce7ff2f63-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.389653 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cc287c5-aa10-4895-96ec-98cce7ff2f63-logs\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.392046 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "2cc287c5-aa10-4895-96ec-98cce7ff2f63" (UID: "2cc287c5-aa10-4895-96ec-98cce7ff2f63"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.392684 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cc287c5-aa10-4895-96ec-98cce7ff2f63-kube-api-access-rnrbj" (OuterVolumeSpecName: "kube-api-access-rnrbj") pod "2cc287c5-aa10-4895-96ec-98cce7ff2f63" (UID: "2cc287c5-aa10-4895-96ec-98cce7ff2f63"). InnerVolumeSpecName "kube-api-access-rnrbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.421207 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cc287c5-aa10-4895-96ec-98cce7ff2f63-scripts" (OuterVolumeSpecName: "scripts") pod "2cc287c5-aa10-4895-96ec-98cce7ff2f63" (UID: "2cc287c5-aa10-4895-96ec-98cce7ff2f63"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.438001 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cc287c5-aa10-4895-96ec-98cce7ff2f63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2cc287c5-aa10-4895-96ec-98cce7ff2f63" (UID: "2cc287c5-aa10-4895-96ec-98cce7ff2f63"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.448760 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cc287c5-aa10-4895-96ec-98cce7ff2f63-config-data" (OuterVolumeSpecName: "config-data") pod "2cc287c5-aa10-4895-96ec-98cce7ff2f63" (UID: "2cc287c5-aa10-4895-96ec-98cce7ff2f63"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.491245 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc287c5-aa10-4895-96ec-98cce7ff2f63-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.491302 4796 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.491316 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cc287c5-aa10-4895-96ec-98cce7ff2f63-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.491329 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cc287c5-aa10-4895-96ec-98cce7ff2f63-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:49 crc 
kubenswrapper[4796]: I1125 14:45:49.491342 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnrbj\" (UniqueName: \"kubernetes.io/projected/2cc287c5-aa10-4895-96ec-98cce7ff2f63-kube-api-access-rnrbj\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.507599 4796 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.538847 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.545757 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.570528 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 14:45:49 crc kubenswrapper[4796]: E1125 14:45:49.571037 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cc287c5-aa10-4895-96ec-98cce7ff2f63" containerName="glance-httpd" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.571060 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc287c5-aa10-4895-96ec-98cce7ff2f63" containerName="glance-httpd" Nov 25 14:45:49 crc kubenswrapper[4796]: E1125 14:45:49.571085 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cc287c5-aa10-4895-96ec-98cce7ff2f63" containerName="glance-log" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.571094 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc287c5-aa10-4895-96ec-98cce7ff2f63" containerName="glance-log" Nov 25 14:45:49 crc kubenswrapper[4796]: E1125 14:45:49.571121 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="821175f1-a773-4def-b744-22423894346c" containerName="init" Nov 25 14:45:49 crc kubenswrapper[4796]: 
I1125 14:45:49.571133 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="821175f1-a773-4def-b744-22423894346c" containerName="init" Nov 25 14:45:49 crc kubenswrapper[4796]: E1125 14:45:49.571156 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32891459-a961-4f3a-9820-5eb167599bd9" containerName="glance-log" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.571164 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="32891459-a961-4f3a-9820-5eb167599bd9" containerName="glance-log" Nov 25 14:45:49 crc kubenswrapper[4796]: E1125 14:45:49.571188 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32891459-a961-4f3a-9820-5eb167599bd9" containerName="glance-httpd" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.571197 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="32891459-a961-4f3a-9820-5eb167599bd9" containerName="glance-httpd" Nov 25 14:45:49 crc kubenswrapper[4796]: E1125 14:45:49.571209 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="821175f1-a773-4def-b744-22423894346c" containerName="dnsmasq-dns" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.571217 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="821175f1-a773-4def-b744-22423894346c" containerName="dnsmasq-dns" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.571465 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="821175f1-a773-4def-b744-22423894346c" containerName="dnsmasq-dns" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.571478 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cc287c5-aa10-4895-96ec-98cce7ff2f63" containerName="glance-httpd" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.571492 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cc287c5-aa10-4895-96ec-98cce7ff2f63" containerName="glance-log" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.571504 4796 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="32891459-a961-4f3a-9820-5eb167599bd9" containerName="glance-log" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.571516 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="32891459-a961-4f3a-9820-5eb167599bd9" containerName="glance-httpd" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.572782 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.574800 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.576418 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.583141 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.594147 4796 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.695694 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/995ed35a-afd4-48d3-af01-e35145fdf1f0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.695753 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.695918 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995ed35a-afd4-48d3-af01-e35145fdf1f0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.695976 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/995ed35a-afd4-48d3-af01-e35145fdf1f0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.696012 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/995ed35a-afd4-48d3-af01-e35145fdf1f0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.696107 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995ed35a-afd4-48d3-af01-e35145fdf1f0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.696152 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/995ed35a-afd4-48d3-af01-e35145fdf1f0-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.696269 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tng4\" (UniqueName: \"kubernetes.io/projected/995ed35a-afd4-48d3-af01-e35145fdf1f0-kube-api-access-5tng4\") pod \"glance-default-internal-api-0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.798107 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/995ed35a-afd4-48d3-af01-e35145fdf1f0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.798163 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.798207 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995ed35a-afd4-48d3-af01-e35145fdf1f0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.798258 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/995ed35a-afd4-48d3-af01-e35145fdf1f0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " 
pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.798347 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/995ed35a-afd4-48d3-af01-e35145fdf1f0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.798444 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995ed35a-afd4-48d3-af01-e35145fdf1f0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.798510 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/995ed35a-afd4-48d3-af01-e35145fdf1f0-logs\") pod \"glance-default-internal-api-0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.798559 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tng4\" (UniqueName: \"kubernetes.io/projected/995ed35a-afd4-48d3-af01-e35145fdf1f0-kube-api-access-5tng4\") pod \"glance-default-internal-api-0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.798627 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" 
Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.799109 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/995ed35a-afd4-48d3-af01-e35145fdf1f0-logs\") pod \"glance-default-internal-api-0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.799207 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/995ed35a-afd4-48d3-af01-e35145fdf1f0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.806885 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/995ed35a-afd4-48d3-af01-e35145fdf1f0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.807022 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995ed35a-afd4-48d3-af01-e35145fdf1f0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.809650 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/995ed35a-afd4-48d3-af01-e35145fdf1f0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.810280 4796 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995ed35a-afd4-48d3-af01-e35145fdf1f0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.815257 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tng4\" (UniqueName: \"kubernetes.io/projected/995ed35a-afd4-48d3-af01-e35145fdf1f0-kube-api-access-5tng4\") pod \"glance-default-internal-api-0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.830762 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:45:49 crc kubenswrapper[4796]: I1125 14:45:49.901795 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.172945 4796 generic.go:334] "Generic (PLEG): container finished" podID="1e99260e-8b90-4cd0-8417-8dc3c142a743" containerID="09f3104b61f642b98d2f0d8f9e593a2b57967e74c31b223316fefe3075fdb61b" exitCode=0 Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.173051 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vmgbv" event={"ID":"1e99260e-8b90-4cd0-8417-8dc3c142a743","Type":"ContainerDied","Data":"09f3104b61f642b98d2f0d8f9e593a2b57967e74c31b223316fefe3075fdb61b"} Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.178277 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.187815 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2cc287c5-aa10-4895-96ec-98cce7ff2f63","Type":"ContainerDied","Data":"a95eda115bd467e4fb83a4590bab08876a401ab4e1a9c27638a02822012546c9"} Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.187906 4796 scope.go:117] "RemoveContainer" containerID="148af3a4b2d78267ecc95158699ca4a686546fc9a707228e9eb139836d25883b" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.226119 4796 scope.go:117] "RemoveContainer" containerID="34a4cb52bb5cd84279869e5338c98a5091c9559a5881e3100a9dac0a7fae099c" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.247243 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.278100 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.288730 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.290262 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.294005 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.294170 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.303472 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.420386 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33a4eb54-4ef7-4290-9030-d957632b40c0-logs\") pod \"glance-default-external-api-0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.420435 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33a4eb54-4ef7-4290-9030-d957632b40c0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.420485 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33a4eb54-4ef7-4290-9030-d957632b40c0-scripts\") pod \"glance-default-external-api-0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.420611 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbzjv\" (UniqueName: 
\"kubernetes.io/projected/33a4eb54-4ef7-4290-9030-d957632b40c0-kube-api-access-dbzjv\") pod \"glance-default-external-api-0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.420662 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33a4eb54-4ef7-4290-9030-d957632b40c0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.420792 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33a4eb54-4ef7-4290-9030-d957632b40c0-config-data\") pod \"glance-default-external-api-0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.420821 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.420839 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33a4eb54-4ef7-4290-9030-d957632b40c0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.426111 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2cc287c5-aa10-4895-96ec-98cce7ff2f63" path="/var/lib/kubelet/pods/2cc287c5-aa10-4895-96ec-98cce7ff2f63/volumes" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.427113 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32891459-a961-4f3a-9820-5eb167599bd9" path="/var/lib/kubelet/pods/32891459-a961-4f3a-9820-5eb167599bd9/volumes" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.442220 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.522808 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33a4eb54-4ef7-4290-9030-d957632b40c0-config-data\") pod \"glance-default-external-api-0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.522859 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.522884 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33a4eb54-4ef7-4290-9030-d957632b40c0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.524016 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33a4eb54-4ef7-4290-9030-d957632b40c0-logs\") pod \"glance-default-external-api-0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") 
" pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.524048 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33a4eb54-4ef7-4290-9030-d957632b40c0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.524117 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33a4eb54-4ef7-4290-9030-d957632b40c0-scripts\") pod \"glance-default-external-api-0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.523331 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.524462 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33a4eb54-4ef7-4290-9030-d957632b40c0-logs\") pod \"glance-default-external-api-0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.524695 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33a4eb54-4ef7-4290-9030-d957632b40c0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 
14:45:50.524735 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbzjv\" (UniqueName: \"kubernetes.io/projected/33a4eb54-4ef7-4290-9030-d957632b40c0-kube-api-access-dbzjv\") pod \"glance-default-external-api-0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.524754 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33a4eb54-4ef7-4290-9030-d957632b40c0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.530641 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33a4eb54-4ef7-4290-9030-d957632b40c0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.535165 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33a4eb54-4ef7-4290-9030-d957632b40c0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.540233 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33a4eb54-4ef7-4290-9030-d957632b40c0-config-data\") pod \"glance-default-external-api-0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.542025 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33a4eb54-4ef7-4290-9030-d957632b40c0-scripts\") pod \"glance-default-external-api-0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.550179 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbzjv\" (UniqueName: \"kubernetes.io/projected/33a4eb54-4ef7-4290-9030-d957632b40c0-kube-api-access-dbzjv\") pod \"glance-default-external-api-0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.565778 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " pod="openstack/glance-default-external-api-0" Nov 25 14:45:50 crc kubenswrapper[4796]: I1125 14:45:50.668739 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.207829 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-n86kb" event={"ID":"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7","Type":"ContainerDied","Data":"af1411edf91c1a46004751f1a322fe7b5f90ddb89b97aba03c1b2b75e2bd97ba"} Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.208413 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af1411edf91c1a46004751f1a322fe7b5f90ddb89b97aba03c1b2b75e2bd97ba" Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.307845 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-n86kb" Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.315496 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.315543 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.424709 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.424747 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.455109 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-scripts\") pod \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\" (UID: \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\") " Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.455207 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-config-data\") pod \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\" (UID: \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\") " Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.455292 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skx5m\" (UniqueName: \"kubernetes.io/projected/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-kube-api-access-skx5m\") pod \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\" (UID: \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\") " Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.455321 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-credential-keys\") pod \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\" (UID: \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\") " Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.455354 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-combined-ca-bundle\") pod \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\" (UID: \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\") " Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.455423 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-fernet-keys\") pod \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\" (UID: \"aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7\") " Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.461127 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-kube-api-access-skx5m" (OuterVolumeSpecName: "kube-api-access-skx5m") pod "aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7" (UID: "aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7"). InnerVolumeSpecName "kube-api-access-skx5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.463408 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7" (UID: "aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.464010 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7" (UID: "aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.466193 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-scripts" (OuterVolumeSpecName: "scripts") pod "aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7" (UID: "aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.501130 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7" (UID: "aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.532797 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-config-data" (OuterVolumeSpecName: "config-data") pod "aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7" (UID: "aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.558128 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.558395 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.558410 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skx5m\" (UniqueName: \"kubernetes.io/projected/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-kube-api-access-skx5m\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.558423 4796 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.558434 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.558442 4796 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.861808 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" Nov 25 14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.949197 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-45fs2"] Nov 25 
14:45:52 crc kubenswrapper[4796]: I1125 14:45:52.950821 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" podUID="fa2e8489-181d-4c50-b9c5-484432e7e070" containerName="dnsmasq-dns" containerID="cri-o://5c985db77f68d3cca3774ab5f365c45b689f277713d8f213ac0a9adea49da527" gracePeriod=10 Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.216417 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-n86kb" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.427136 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5d994c97d7-9qxnr"] Nov 25 14:45:53 crc kubenswrapper[4796]: E1125 14:45:53.427793 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7" containerName="keystone-bootstrap" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.427891 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7" containerName="keystone-bootstrap" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.428173 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7" containerName="keystone-bootstrap" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.428963 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.432386 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.432893 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.433494 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.433654 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-8z6zn" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.434085 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.434245 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.448391 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5d994c97d7-9qxnr"] Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.475805 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47119c19-fca4-4a63-8170-d4dee8201af8-public-tls-certs\") pod \"keystone-5d994c97d7-9qxnr\" (UID: \"47119c19-fca4-4a63-8170-d4dee8201af8\") " pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.477996 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47119c19-fca4-4a63-8170-d4dee8201af8-config-data\") pod \"keystone-5d994c97d7-9qxnr\" (UID: \"47119c19-fca4-4a63-8170-d4dee8201af8\") " 
pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.478106 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47119c19-fca4-4a63-8170-d4dee8201af8-internal-tls-certs\") pod \"keystone-5d994c97d7-9qxnr\" (UID: \"47119c19-fca4-4a63-8170-d4dee8201af8\") " pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.478427 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/47119c19-fca4-4a63-8170-d4dee8201af8-fernet-keys\") pod \"keystone-5d994c97d7-9qxnr\" (UID: \"47119c19-fca4-4a63-8170-d4dee8201af8\") " pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.478687 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47119c19-fca4-4a63-8170-d4dee8201af8-scripts\") pod \"keystone-5d994c97d7-9qxnr\" (UID: \"47119c19-fca4-4a63-8170-d4dee8201af8\") " pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.478805 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47119c19-fca4-4a63-8170-d4dee8201af8-combined-ca-bundle\") pod \"keystone-5d994c97d7-9qxnr\" (UID: \"47119c19-fca4-4a63-8170-d4dee8201af8\") " pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.478850 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwjh6\" (UniqueName: \"kubernetes.io/projected/47119c19-fca4-4a63-8170-d4dee8201af8-kube-api-access-cwjh6\") pod \"keystone-5d994c97d7-9qxnr\" (UID: 
\"47119c19-fca4-4a63-8170-d4dee8201af8\") " pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.479849 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/47119c19-fca4-4a63-8170-d4dee8201af8-credential-keys\") pod \"keystone-5d994c97d7-9qxnr\" (UID: \"47119c19-fca4-4a63-8170-d4dee8201af8\") " pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.588462 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47119c19-fca4-4a63-8170-d4dee8201af8-public-tls-certs\") pod \"keystone-5d994c97d7-9qxnr\" (UID: \"47119c19-fca4-4a63-8170-d4dee8201af8\") " pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.588515 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47119c19-fca4-4a63-8170-d4dee8201af8-config-data\") pod \"keystone-5d994c97d7-9qxnr\" (UID: \"47119c19-fca4-4a63-8170-d4dee8201af8\") " pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.588551 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47119c19-fca4-4a63-8170-d4dee8201af8-internal-tls-certs\") pod \"keystone-5d994c97d7-9qxnr\" (UID: \"47119c19-fca4-4a63-8170-d4dee8201af8\") " pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.588591 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/47119c19-fca4-4a63-8170-d4dee8201af8-fernet-keys\") pod \"keystone-5d994c97d7-9qxnr\" (UID: \"47119c19-fca4-4a63-8170-d4dee8201af8\") " 
pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.588623 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47119c19-fca4-4a63-8170-d4dee8201af8-scripts\") pod \"keystone-5d994c97d7-9qxnr\" (UID: \"47119c19-fca4-4a63-8170-d4dee8201af8\") " pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.588659 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwjh6\" (UniqueName: \"kubernetes.io/projected/47119c19-fca4-4a63-8170-d4dee8201af8-kube-api-access-cwjh6\") pod \"keystone-5d994c97d7-9qxnr\" (UID: \"47119c19-fca4-4a63-8170-d4dee8201af8\") " pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.588675 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47119c19-fca4-4a63-8170-d4dee8201af8-combined-ca-bundle\") pod \"keystone-5d994c97d7-9qxnr\" (UID: \"47119c19-fca4-4a63-8170-d4dee8201af8\") " pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.588721 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/47119c19-fca4-4a63-8170-d4dee8201af8-credential-keys\") pod \"keystone-5d994c97d7-9qxnr\" (UID: \"47119c19-fca4-4a63-8170-d4dee8201af8\") " pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.607591 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47119c19-fca4-4a63-8170-d4dee8201af8-public-tls-certs\") pod \"keystone-5d994c97d7-9qxnr\" (UID: \"47119c19-fca4-4a63-8170-d4dee8201af8\") " pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 
14:45:53.607934 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/47119c19-fca4-4a63-8170-d4dee8201af8-credential-keys\") pod \"keystone-5d994c97d7-9qxnr\" (UID: \"47119c19-fca4-4a63-8170-d4dee8201af8\") " pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.608425 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47119c19-fca4-4a63-8170-d4dee8201af8-combined-ca-bundle\") pod \"keystone-5d994c97d7-9qxnr\" (UID: \"47119c19-fca4-4a63-8170-d4dee8201af8\") " pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.609037 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47119c19-fca4-4a63-8170-d4dee8201af8-config-data\") pod \"keystone-5d994c97d7-9qxnr\" (UID: \"47119c19-fca4-4a63-8170-d4dee8201af8\") " pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.611920 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47119c19-fca4-4a63-8170-d4dee8201af8-internal-tls-certs\") pod \"keystone-5d994c97d7-9qxnr\" (UID: \"47119c19-fca4-4a63-8170-d4dee8201af8\") " pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.612513 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/47119c19-fca4-4a63-8170-d4dee8201af8-fernet-keys\") pod \"keystone-5d994c97d7-9qxnr\" (UID: \"47119c19-fca4-4a63-8170-d4dee8201af8\") " pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.627208 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwjh6\" (UniqueName: 
\"kubernetes.io/projected/47119c19-fca4-4a63-8170-d4dee8201af8-kube-api-access-cwjh6\") pod \"keystone-5d994c97d7-9qxnr\" (UID: \"47119c19-fca4-4a63-8170-d4dee8201af8\") " pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.627917 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47119c19-fca4-4a63-8170-d4dee8201af8-scripts\") pod \"keystone-5d994c97d7-9qxnr\" (UID: \"47119c19-fca4-4a63-8170-d4dee8201af8\") " pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.743588 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:53 crc kubenswrapper[4796]: I1125 14:45:53.925395 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" podUID="fa2e8489-181d-4c50-b9c5-484432e7e070" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.141:5353: connect: connection refused" Nov 25 14:45:54 crc kubenswrapper[4796]: I1125 14:45:54.826910 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-vmgbv" Nov 25 14:45:54 crc kubenswrapper[4796]: I1125 14:45:54.911535 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwkq8\" (UniqueName: \"kubernetes.io/projected/1e99260e-8b90-4cd0-8417-8dc3c142a743-kube-api-access-fwkq8\") pod \"1e99260e-8b90-4cd0-8417-8dc3c142a743\" (UID: \"1e99260e-8b90-4cd0-8417-8dc3c142a743\") " Nov 25 14:45:54 crc kubenswrapper[4796]: I1125 14:45:54.912046 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e99260e-8b90-4cd0-8417-8dc3c142a743-combined-ca-bundle\") pod \"1e99260e-8b90-4cd0-8417-8dc3c142a743\" (UID: \"1e99260e-8b90-4cd0-8417-8dc3c142a743\") " Nov 25 14:45:54 crc kubenswrapper[4796]: I1125 14:45:54.912095 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e99260e-8b90-4cd0-8417-8dc3c142a743-scripts\") pod \"1e99260e-8b90-4cd0-8417-8dc3c142a743\" (UID: \"1e99260e-8b90-4cd0-8417-8dc3c142a743\") " Nov 25 14:45:54 crc kubenswrapper[4796]: I1125 14:45:54.912168 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e99260e-8b90-4cd0-8417-8dc3c142a743-logs\") pod \"1e99260e-8b90-4cd0-8417-8dc3c142a743\" (UID: \"1e99260e-8b90-4cd0-8417-8dc3c142a743\") " Nov 25 14:45:54 crc kubenswrapper[4796]: I1125 14:45:54.912243 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e99260e-8b90-4cd0-8417-8dc3c142a743-config-data\") pod \"1e99260e-8b90-4cd0-8417-8dc3c142a743\" (UID: \"1e99260e-8b90-4cd0-8417-8dc3c142a743\") " Nov 25 14:45:54 crc kubenswrapper[4796]: I1125 14:45:54.916057 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/1e99260e-8b90-4cd0-8417-8dc3c142a743-logs" (OuterVolumeSpecName: "logs") pod "1e99260e-8b90-4cd0-8417-8dc3c142a743" (UID: "1e99260e-8b90-4cd0-8417-8dc3c142a743"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:45:54 crc kubenswrapper[4796]: I1125 14:45:54.916969 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e99260e-8b90-4cd0-8417-8dc3c142a743-logs\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:54 crc kubenswrapper[4796]: I1125 14:45:54.918141 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e99260e-8b90-4cd0-8417-8dc3c142a743-kube-api-access-fwkq8" (OuterVolumeSpecName: "kube-api-access-fwkq8") pod "1e99260e-8b90-4cd0-8417-8dc3c142a743" (UID: "1e99260e-8b90-4cd0-8417-8dc3c142a743"). InnerVolumeSpecName "kube-api-access-fwkq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:45:54 crc kubenswrapper[4796]: I1125 14:45:54.941030 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e99260e-8b90-4cd0-8417-8dc3c142a743-scripts" (OuterVolumeSpecName: "scripts") pod "1e99260e-8b90-4cd0-8417-8dc3c142a743" (UID: "1e99260e-8b90-4cd0-8417-8dc3c142a743"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:54 crc kubenswrapper[4796]: I1125 14:45:54.943792 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e99260e-8b90-4cd0-8417-8dc3c142a743-config-data" (OuterVolumeSpecName: "config-data") pod "1e99260e-8b90-4cd0-8417-8dc3c142a743" (UID: "1e99260e-8b90-4cd0-8417-8dc3c142a743"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:54 crc kubenswrapper[4796]: I1125 14:45:54.956316 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e99260e-8b90-4cd0-8417-8dc3c142a743-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e99260e-8b90-4cd0-8417-8dc3c142a743" (UID: "1e99260e-8b90-4cd0-8417-8dc3c142a743"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.018759 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwkq8\" (UniqueName: \"kubernetes.io/projected/1e99260e-8b90-4cd0-8417-8dc3c142a743-kube-api-access-fwkq8\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.018787 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e99260e-8b90-4cd0-8417-8dc3c142a743-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.018797 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e99260e-8b90-4cd0-8417-8dc3c142a743-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.018805 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e99260e-8b90-4cd0-8417-8dc3c142a743-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.022857 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.121072 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-config\") pod \"fa2e8489-181d-4c50-b9c5-484432e7e070\" (UID: \"fa2e8489-181d-4c50-b9c5-484432e7e070\") " Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.122230 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-dns-swift-storage-0\") pod \"fa2e8489-181d-4c50-b9c5-484432e7e070\" (UID: \"fa2e8489-181d-4c50-b9c5-484432e7e070\") " Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.122604 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkjl4\" (UniqueName: \"kubernetes.io/projected/fa2e8489-181d-4c50-b9c5-484432e7e070-kube-api-access-nkjl4\") pod \"fa2e8489-181d-4c50-b9c5-484432e7e070\" (UID: \"fa2e8489-181d-4c50-b9c5-484432e7e070\") " Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.122642 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-ovsdbserver-nb\") pod \"fa2e8489-181d-4c50-b9c5-484432e7e070\" (UID: \"fa2e8489-181d-4c50-b9c5-484432e7e070\") " Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.122673 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-ovsdbserver-sb\") pod \"fa2e8489-181d-4c50-b9c5-484432e7e070\" (UID: \"fa2e8489-181d-4c50-b9c5-484432e7e070\") " Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.122707 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-dns-svc\") pod \"fa2e8489-181d-4c50-b9c5-484432e7e070\" (UID: \"fa2e8489-181d-4c50-b9c5-484432e7e070\") " Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.127850 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa2e8489-181d-4c50-b9c5-484432e7e070-kube-api-access-nkjl4" (OuterVolumeSpecName: "kube-api-access-nkjl4") pod "fa2e8489-181d-4c50-b9c5-484432e7e070" (UID: "fa2e8489-181d-4c50-b9c5-484432e7e070"). InnerVolumeSpecName "kube-api-access-nkjl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.177797 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fa2e8489-181d-4c50-b9c5-484432e7e070" (UID: "fa2e8489-181d-4c50-b9c5-484432e7e070"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.178795 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fa2e8489-181d-4c50-b9c5-484432e7e070" (UID: "fa2e8489-181d-4c50-b9c5-484432e7e070"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.185108 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fa2e8489-181d-4c50-b9c5-484432e7e070" (UID: "fa2e8489-181d-4c50-b9c5-484432e7e070"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:45:55 crc kubenswrapper[4796]: E1125 14:45:55.190629 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-dns-swift-storage-0 podName:fa2e8489-181d-4c50-b9c5-484432e7e070 nodeName:}" failed. No retries permitted until 2025-11-25 14:45:55.690601412 +0000 UTC m=+1284.033710836 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "dns-swift-storage-0" (UniqueName: "kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-dns-swift-storage-0") pod "fa2e8489-181d-4c50-b9c5-484432e7e070" (UID: "fa2e8489-181d-4c50-b9c5-484432e7e070") : error deleting /var/lib/kubelet/pods/fa2e8489-181d-4c50-b9c5-484432e7e070/volume-subpaths: remove /var/lib/kubelet/pods/fa2e8489-181d-4c50-b9c5-484432e7e070/volume-subpaths: no such file or directory Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.190918 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-config" (OuterVolumeSpecName: "config") pod "fa2e8489-181d-4c50-b9c5-484432e7e070" (UID: "fa2e8489-181d-4c50-b9c5-484432e7e070"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.205153 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5d994c97d7-9qxnr"] Nov 25 14:45:55 crc kubenswrapper[4796]: W1125 14:45:55.207111 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47119c19_fca4_4a63_8170_d4dee8201af8.slice/crio-b2a558a7b82899bbee5d1b6bad4935658c7b87a481528305425353536b8a21f3 WatchSource:0}: Error finding container b2a558a7b82899bbee5d1b6bad4935658c7b87a481528305425353536b8a21f3: Status 404 returned error can't find the container with id b2a558a7b82899bbee5d1b6bad4935658c7b87a481528305425353536b8a21f3 Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.227098 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.227128 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkjl4\" (UniqueName: \"kubernetes.io/projected/fa2e8489-181d-4c50-b9c5-484432e7e070-kube-api-access-nkjl4\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.227141 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.227152 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.227161 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.237259 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vmgbv" event={"ID":"1e99260e-8b90-4cd0-8417-8dc3c142a743","Type":"ContainerDied","Data":"dc8e2cbc5e596fc06fe737a65646969b3176cb69019192d35321c7fa9edac52d"} Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.237295 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc8e2cbc5e596fc06fe737a65646969b3176cb69019192d35321c7fa9edac52d" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.237348 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-vmgbv" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.243624 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5d994c97d7-9qxnr" event={"ID":"47119c19-fca4-4a63-8170-d4dee8201af8","Type":"ContainerStarted","Data":"b2a558a7b82899bbee5d1b6bad4935658c7b87a481528305425353536b8a21f3"} Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.250597 4796 generic.go:334] "Generic (PLEG): container finished" podID="182a7451-724e-4649-a911-f26535ec04f9" containerID="b5f9a3f112bf73deb92d34f9303693dd31b99f2aa0bf345c73078397cc705f6e" exitCode=0 Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.250673 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8fxjq" event={"ID":"182a7451-724e-4649-a911-f26535ec04f9","Type":"ContainerDied","Data":"b5f9a3f112bf73deb92d34f9303693dd31b99f2aa0bf345c73078397cc705f6e"} Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.253797 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ttn2n" event={"ID":"c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2","Type":"ContainerStarted","Data":"2c63332215ffc1b35fcf28a45694b19de2296c9096f319e360944a2cfea88350"} 
Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.256591 4796 generic.go:334] "Generic (PLEG): container finished" podID="fa2e8489-181d-4c50-b9c5-484432e7e070" containerID="5c985db77f68d3cca3774ab5f365c45b689f277713d8f213ac0a9adea49da527" exitCode=0 Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.256653 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" event={"ID":"fa2e8489-181d-4c50-b9c5-484432e7e070","Type":"ContainerDied","Data":"5c985db77f68d3cca3774ab5f365c45b689f277713d8f213ac0a9adea49da527"} Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.256680 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" event={"ID":"fa2e8489-181d-4c50-b9c5-484432e7e070","Type":"ContainerDied","Data":"b99932f691c72032a4a04e2846b72bb600853850c8553989e1cb85f97cee0d9e"} Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.256728 4796 scope.go:117] "RemoveContainer" containerID="5c985db77f68d3cca3774ab5f365c45b689f277713d8f213ac0a9adea49da527" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.256741 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-45fs2" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.259513 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"995ed35a-afd4-48d3-af01-e35145fdf1f0","Type":"ContainerStarted","Data":"d66d599e056e67e686e5aa61d621a456b12d1a40d060c7b0154a83af20748556"} Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.292554 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-ttn2n" podStartSLOduration=2.215045206 podStartE2EDuration="42.2925365s" podCreationTimestamp="2025-11-25 14:45:13 +0000 UTC" firstStartedPulling="2025-11-25 14:45:14.907350624 +0000 UTC m=+1243.250460048" lastFinishedPulling="2025-11-25 14:45:54.984841918 +0000 UTC m=+1283.327951342" observedRunningTime="2025-11-25 14:45:55.287101411 +0000 UTC m=+1283.630210835" watchObservedRunningTime="2025-11-25 14:45:55.2925365 +0000 UTC m=+1283.635645924" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.305278 4796 scope.go:117] "RemoveContainer" containerID="ebf03709a7671242e78f83c2219185f594b0fc0e62c717fbc25b86db7014a271" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.330414 4796 scope.go:117] "RemoveContainer" containerID="5c985db77f68d3cca3774ab5f365c45b689f277713d8f213ac0a9adea49da527" Nov 25 14:45:55 crc kubenswrapper[4796]: E1125 14:45:55.332497 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c985db77f68d3cca3774ab5f365c45b689f277713d8f213ac0a9adea49da527\": container with ID starting with 5c985db77f68d3cca3774ab5f365c45b689f277713d8f213ac0a9adea49da527 not found: ID does not exist" containerID="5c985db77f68d3cca3774ab5f365c45b689f277713d8f213ac0a9adea49da527" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.332534 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5c985db77f68d3cca3774ab5f365c45b689f277713d8f213ac0a9adea49da527"} err="failed to get container status \"5c985db77f68d3cca3774ab5f365c45b689f277713d8f213ac0a9adea49da527\": rpc error: code = NotFound desc = could not find container \"5c985db77f68d3cca3774ab5f365c45b689f277713d8f213ac0a9adea49da527\": container with ID starting with 5c985db77f68d3cca3774ab5f365c45b689f277713d8f213ac0a9adea49da527 not found: ID does not exist" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.332559 4796 scope.go:117] "RemoveContainer" containerID="ebf03709a7671242e78f83c2219185f594b0fc0e62c717fbc25b86db7014a271" Nov 25 14:45:55 crc kubenswrapper[4796]: E1125 14:45:55.333042 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebf03709a7671242e78f83c2219185f594b0fc0e62c717fbc25b86db7014a271\": container with ID starting with ebf03709a7671242e78f83c2219185f594b0fc0e62c717fbc25b86db7014a271 not found: ID does not exist" containerID="ebf03709a7671242e78f83c2219185f594b0fc0e62c717fbc25b86db7014a271" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.333113 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebf03709a7671242e78f83c2219185f594b0fc0e62c717fbc25b86db7014a271"} err="failed to get container status \"ebf03709a7671242e78f83c2219185f594b0fc0e62c717fbc25b86db7014a271\": rpc error: code = NotFound desc = could not find container \"ebf03709a7671242e78f83c2219185f594b0fc0e62c717fbc25b86db7014a271\": container with ID starting with ebf03709a7671242e78f83c2219185f594b0fc0e62c717fbc25b86db7014a271 not found: ID does not exist" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.390030 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 14:45:55 crc kubenswrapper[4796]: W1125 14:45:55.398168 4796 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33a4eb54_4ef7_4290_9030_d957632b40c0.slice/crio-ef57ac5a3abb6a5c98f7993a739503b0942e30d189e9515f5d47d64e17183d86 WatchSource:0}: Error finding container ef57ac5a3abb6a5c98f7993a739503b0942e30d189e9515f5d47d64e17183d86: Status 404 returned error can't find the container with id ef57ac5a3abb6a5c98f7993a739503b0942e30d189e9515f5d47d64e17183d86 Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.739188 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-dns-swift-storage-0\") pod \"fa2e8489-181d-4c50-b9c5-484432e7e070\" (UID: \"fa2e8489-181d-4c50-b9c5-484432e7e070\") " Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.739876 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fa2e8489-181d-4c50-b9c5-484432e7e070" (UID: "fa2e8489-181d-4c50-b9c5-484432e7e070"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.841381 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa2e8489-181d-4c50-b9c5-484432e7e070-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.894673 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-45fs2"] Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.901192 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-45fs2"] Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.943448 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-79bd96dcd6-f2n5f"] Nov 25 14:45:55 crc kubenswrapper[4796]: E1125 14:45:55.943860 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa2e8489-181d-4c50-b9c5-484432e7e070" containerName="dnsmasq-dns" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.943871 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa2e8489-181d-4c50-b9c5-484432e7e070" containerName="dnsmasq-dns" Nov 25 14:45:55 crc kubenswrapper[4796]: E1125 14:45:55.943895 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa2e8489-181d-4c50-b9c5-484432e7e070" containerName="init" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.943901 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa2e8489-181d-4c50-b9c5-484432e7e070" containerName="init" Nov 25 14:45:55 crc kubenswrapper[4796]: E1125 14:45:55.943908 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e99260e-8b90-4cd0-8417-8dc3c142a743" containerName="placement-db-sync" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.943914 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e99260e-8b90-4cd0-8417-8dc3c142a743" containerName="placement-db-sync" Nov 25 
14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.944470 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e99260e-8b90-4cd0-8417-8dc3c142a743" containerName="placement-db-sync" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.944497 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa2e8489-181d-4c50-b9c5-484432e7e070" containerName="dnsmasq-dns" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.945410 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-79bd96dcd6-f2n5f" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.947352 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rbpts" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.947727 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.947779 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.947855 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.950083 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 25 14:45:55 crc kubenswrapper[4796]: I1125 14:45:55.957117 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-79bd96dcd6-f2n5f"] Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.045415 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/970dd58d-4266-4a39-9d8b-75190f4286bc-logs\") pod \"placement-79bd96dcd6-f2n5f\" (UID: \"970dd58d-4266-4a39-9d8b-75190f4286bc\") " pod="openstack/placement-79bd96dcd6-f2n5f" Nov 25 14:45:56 crc 
kubenswrapper[4796]: I1125 14:45:56.045479 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z64j9\" (UniqueName: \"kubernetes.io/projected/970dd58d-4266-4a39-9d8b-75190f4286bc-kube-api-access-z64j9\") pod \"placement-79bd96dcd6-f2n5f\" (UID: \"970dd58d-4266-4a39-9d8b-75190f4286bc\") " pod="openstack/placement-79bd96dcd6-f2n5f" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.045499 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/970dd58d-4266-4a39-9d8b-75190f4286bc-scripts\") pod \"placement-79bd96dcd6-f2n5f\" (UID: \"970dd58d-4266-4a39-9d8b-75190f4286bc\") " pod="openstack/placement-79bd96dcd6-f2n5f" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.045639 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/970dd58d-4266-4a39-9d8b-75190f4286bc-internal-tls-certs\") pod \"placement-79bd96dcd6-f2n5f\" (UID: \"970dd58d-4266-4a39-9d8b-75190f4286bc\") " pod="openstack/placement-79bd96dcd6-f2n5f" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.045719 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/970dd58d-4266-4a39-9d8b-75190f4286bc-config-data\") pod \"placement-79bd96dcd6-f2n5f\" (UID: \"970dd58d-4266-4a39-9d8b-75190f4286bc\") " pod="openstack/placement-79bd96dcd6-f2n5f" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.045759 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970dd58d-4266-4a39-9d8b-75190f4286bc-combined-ca-bundle\") pod \"placement-79bd96dcd6-f2n5f\" (UID: \"970dd58d-4266-4a39-9d8b-75190f4286bc\") " pod="openstack/placement-79bd96dcd6-f2n5f" 
Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.045833 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/970dd58d-4266-4a39-9d8b-75190f4286bc-public-tls-certs\") pod \"placement-79bd96dcd6-f2n5f\" (UID: \"970dd58d-4266-4a39-9d8b-75190f4286bc\") " pod="openstack/placement-79bd96dcd6-f2n5f" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.147386 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/970dd58d-4266-4a39-9d8b-75190f4286bc-logs\") pod \"placement-79bd96dcd6-f2n5f\" (UID: \"970dd58d-4266-4a39-9d8b-75190f4286bc\") " pod="openstack/placement-79bd96dcd6-f2n5f" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.147440 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z64j9\" (UniqueName: \"kubernetes.io/projected/970dd58d-4266-4a39-9d8b-75190f4286bc-kube-api-access-z64j9\") pod \"placement-79bd96dcd6-f2n5f\" (UID: \"970dd58d-4266-4a39-9d8b-75190f4286bc\") " pod="openstack/placement-79bd96dcd6-f2n5f" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.147458 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/970dd58d-4266-4a39-9d8b-75190f4286bc-scripts\") pod \"placement-79bd96dcd6-f2n5f\" (UID: \"970dd58d-4266-4a39-9d8b-75190f4286bc\") " pod="openstack/placement-79bd96dcd6-f2n5f" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.147910 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/970dd58d-4266-4a39-9d8b-75190f4286bc-logs\") pod \"placement-79bd96dcd6-f2n5f\" (UID: \"970dd58d-4266-4a39-9d8b-75190f4286bc\") " pod="openstack/placement-79bd96dcd6-f2n5f" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.148176 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/970dd58d-4266-4a39-9d8b-75190f4286bc-internal-tls-certs\") pod \"placement-79bd96dcd6-f2n5f\" (UID: \"970dd58d-4266-4a39-9d8b-75190f4286bc\") " pod="openstack/placement-79bd96dcd6-f2n5f" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.148288 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/970dd58d-4266-4a39-9d8b-75190f4286bc-config-data\") pod \"placement-79bd96dcd6-f2n5f\" (UID: \"970dd58d-4266-4a39-9d8b-75190f4286bc\") " pod="openstack/placement-79bd96dcd6-f2n5f" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.148329 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970dd58d-4266-4a39-9d8b-75190f4286bc-combined-ca-bundle\") pod \"placement-79bd96dcd6-f2n5f\" (UID: \"970dd58d-4266-4a39-9d8b-75190f4286bc\") " pod="openstack/placement-79bd96dcd6-f2n5f" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.148425 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/970dd58d-4266-4a39-9d8b-75190f4286bc-public-tls-certs\") pod \"placement-79bd96dcd6-f2n5f\" (UID: \"970dd58d-4266-4a39-9d8b-75190f4286bc\") " pod="openstack/placement-79bd96dcd6-f2n5f" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.151211 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/970dd58d-4266-4a39-9d8b-75190f4286bc-scripts\") pod \"placement-79bd96dcd6-f2n5f\" (UID: \"970dd58d-4266-4a39-9d8b-75190f4286bc\") " pod="openstack/placement-79bd96dcd6-f2n5f" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.153202 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/970dd58d-4266-4a39-9d8b-75190f4286bc-internal-tls-certs\") pod \"placement-79bd96dcd6-f2n5f\" (UID: \"970dd58d-4266-4a39-9d8b-75190f4286bc\") " pod="openstack/placement-79bd96dcd6-f2n5f" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.156542 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970dd58d-4266-4a39-9d8b-75190f4286bc-combined-ca-bundle\") pod \"placement-79bd96dcd6-f2n5f\" (UID: \"970dd58d-4266-4a39-9d8b-75190f4286bc\") " pod="openstack/placement-79bd96dcd6-f2n5f" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.156788 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/970dd58d-4266-4a39-9d8b-75190f4286bc-config-data\") pod \"placement-79bd96dcd6-f2n5f\" (UID: \"970dd58d-4266-4a39-9d8b-75190f4286bc\") " pod="openstack/placement-79bd96dcd6-f2n5f" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.163428 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/970dd58d-4266-4a39-9d8b-75190f4286bc-public-tls-certs\") pod \"placement-79bd96dcd6-f2n5f\" (UID: \"970dd58d-4266-4a39-9d8b-75190f4286bc\") " pod="openstack/placement-79bd96dcd6-f2n5f" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.166277 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z64j9\" (UniqueName: \"kubernetes.io/projected/970dd58d-4266-4a39-9d8b-75190f4286bc-kube-api-access-z64j9\") pod \"placement-79bd96dcd6-f2n5f\" (UID: \"970dd58d-4266-4a39-9d8b-75190f4286bc\") " pod="openstack/placement-79bd96dcd6-f2n5f" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.276536 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"995ed35a-afd4-48d3-af01-e35145fdf1f0","Type":"ContainerStarted","Data":"5beff934f399a69659913eab987e79d284d65bbc196681ff4103d40c3afe3bda"} Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.278317 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5d994c97d7-9qxnr" event={"ID":"47119c19-fca4-4a63-8170-d4dee8201af8","Type":"ContainerStarted","Data":"f744ac2dcf5f9aaf1048994ef4840e576492afb2d03bc53d26d949e50cdae65e"} Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.278490 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.279821 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-79bd96dcd6-f2n5f" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.288261 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"33a4eb54-4ef7-4290-9030-d957632b40c0","Type":"ContainerStarted","Data":"5ceffb5ecf4222a555d788541c1f8317d254410c6f2305d650c243a46f998732"} Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.288315 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"33a4eb54-4ef7-4290-9030-d957632b40c0","Type":"ContainerStarted","Data":"ef57ac5a3abb6a5c98f7993a739503b0942e30d189e9515f5d47d64e17183d86"} Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.294754 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"395c0fb7-8e73-4c01-a5fb-6b17af27e57d","Type":"ContainerStarted","Data":"deb3816973ceddbd56c1daae3aa482fb1b26e94d10c408e57e2148d0dbdcbccd"} Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.310284 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5d994c97d7-9qxnr" podStartSLOduration=3.310262867 podStartE2EDuration="3.310262867s" 
podCreationTimestamp="2025-11-25 14:45:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:45:56.298118909 +0000 UTC m=+1284.641228353" watchObservedRunningTime="2025-11-25 14:45:56.310262867 +0000 UTC m=+1284.653372301" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.426376 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa2e8489-181d-4c50-b9c5-484432e7e070" path="/var/lib/kubelet/pods/fa2e8489-181d-4c50-b9c5-484432e7e070/volumes" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.761448 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8fxjq" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.818741 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-79bd96dcd6-f2n5f"] Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.860729 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/182a7451-724e-4649-a911-f26535ec04f9-config\") pod \"182a7451-724e-4649-a911-f26535ec04f9\" (UID: \"182a7451-724e-4649-a911-f26535ec04f9\") " Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.860914 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5qzl\" (UniqueName: \"kubernetes.io/projected/182a7451-724e-4649-a911-f26535ec04f9-kube-api-access-x5qzl\") pod \"182a7451-724e-4649-a911-f26535ec04f9\" (UID: \"182a7451-724e-4649-a911-f26535ec04f9\") " Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.860962 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/182a7451-724e-4649-a911-f26535ec04f9-combined-ca-bundle\") pod \"182a7451-724e-4649-a911-f26535ec04f9\" (UID: \"182a7451-724e-4649-a911-f26535ec04f9\") " Nov 25 14:45:56 crc 
kubenswrapper[4796]: I1125 14:45:56.864336 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/182a7451-724e-4649-a911-f26535ec04f9-kube-api-access-x5qzl" (OuterVolumeSpecName: "kube-api-access-x5qzl") pod "182a7451-724e-4649-a911-f26535ec04f9" (UID: "182a7451-724e-4649-a911-f26535ec04f9"). InnerVolumeSpecName "kube-api-access-x5qzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.885698 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/182a7451-724e-4649-a911-f26535ec04f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "182a7451-724e-4649-a911-f26535ec04f9" (UID: "182a7451-724e-4649-a911-f26535ec04f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.894205 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/182a7451-724e-4649-a911-f26535ec04f9-config" (OuterVolumeSpecName: "config") pod "182a7451-724e-4649-a911-f26535ec04f9" (UID: "182a7451-724e-4649-a911-f26535ec04f9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.964537 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/182a7451-724e-4649-a911-f26535ec04f9-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.964639 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5qzl\" (UniqueName: \"kubernetes.io/projected/182a7451-724e-4649-a911-f26535ec04f9-kube-api-access-x5qzl\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:56 crc kubenswrapper[4796]: I1125 14:45:56.964656 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/182a7451-724e-4649-a911-f26535ec04f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.319521 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8fxjq" event={"ID":"182a7451-724e-4649-a911-f26535ec04f9","Type":"ContainerDied","Data":"8b93e0722e2474e138a91a3dcdec2aa677bd60bfeb0bc4f8fdf777b3527f8503"} Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.319561 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b93e0722e2474e138a91a3dcdec2aa677bd60bfeb0bc4f8fdf777b3527f8503" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.319902 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-8fxjq" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.321561 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79bd96dcd6-f2n5f" event={"ID":"970dd58d-4266-4a39-9d8b-75190f4286bc","Type":"ContainerStarted","Data":"09d6da01bb1ced2093c184b24e79c4521e8ab8a88db7f8c094858c43ec41831a"} Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.321629 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79bd96dcd6-f2n5f" event={"ID":"970dd58d-4266-4a39-9d8b-75190f4286bc","Type":"ContainerStarted","Data":"3252b1efd84f8cd714b2e5464e5ddada1c8040934ad2090f9b05a6d76a3a1da0"} Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.324100 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"995ed35a-afd4-48d3-af01-e35145fdf1f0","Type":"ContainerStarted","Data":"1bdccd89deee39bf25f7bd73ea512b28b91363c0fa66e8bcb52c96e2b52eb6ab"} Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.331130 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"33a4eb54-4ef7-4290-9030-d957632b40c0","Type":"ContainerStarted","Data":"44833391bffa7612ed487dfbef1eb8472fa974a41fa7e889c651b5b28baa05e1"} Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.362500 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.36247549 podStartE2EDuration="8.36247549s" podCreationTimestamp="2025-11-25 14:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:45:57.346337567 +0000 UTC m=+1285.689447001" watchObservedRunningTime="2025-11-25 14:45:57.36247549 +0000 UTC m=+1285.705584924" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.465028 4796 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.465003207 podStartE2EDuration="7.465003207s" podCreationTimestamp="2025-11-25 14:45:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:45:57.448371898 +0000 UTC m=+1285.791481322" watchObservedRunningTime="2025-11-25 14:45:57.465003207 +0000 UTC m=+1285.808112641" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.506133 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-wrgpq"] Nov 25 14:45:57 crc kubenswrapper[4796]: E1125 14:45:57.506490 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="182a7451-724e-4649-a911-f26535ec04f9" containerName="neutron-db-sync" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.506505 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="182a7451-724e-4649-a911-f26535ec04f9" containerName="neutron-db-sync" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.506668 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="182a7451-724e-4649-a911-f26535ec04f9" containerName="neutron-db-sync" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.507486 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.542869 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-wrgpq"] Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.564465 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5f9cd6669d-kmwxf"] Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.566280 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5f9cd6669d-kmwxf" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.574114 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.574389 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.574675 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-p2rhf" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.574889 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.615381 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f9cd6669d-kmwxf"] Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.623028 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-ovndb-tls-certs\") pod \"neutron-5f9cd6669d-kmwxf\" (UID: \"d8d5b61c-f184-4963-ba9f-9cf698fd8e60\") " pod="openstack/neutron-5f9cd6669d-kmwxf" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.623106 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-dns-svc\") pod \"dnsmasq-dns-55f844cf75-wrgpq\" (UID: \"1962316d-f4d0-407d-8070-1b208432c8fa\") " pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.623432 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-wrgpq\" 
(UID: \"1962316d-f4d0-407d-8070-1b208432c8fa\") " pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.623512 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2w7c\" (UniqueName: \"kubernetes.io/projected/1962316d-f4d0-407d-8070-1b208432c8fa-kube-api-access-c2w7c\") pod \"dnsmasq-dns-55f844cf75-wrgpq\" (UID: \"1962316d-f4d0-407d-8070-1b208432c8fa\") " pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.624161 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk92c\" (UniqueName: \"kubernetes.io/projected/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-kube-api-access-hk92c\") pod \"neutron-5f9cd6669d-kmwxf\" (UID: \"d8d5b61c-f184-4963-ba9f-9cf698fd8e60\") " pod="openstack/neutron-5f9cd6669d-kmwxf" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.624311 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-httpd-config\") pod \"neutron-5f9cd6669d-kmwxf\" (UID: \"d8d5b61c-f184-4963-ba9f-9cf698fd8e60\") " pod="openstack/neutron-5f9cd6669d-kmwxf" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.624436 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-wrgpq\" (UID: \"1962316d-f4d0-407d-8070-1b208432c8fa\") " pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.624472 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-combined-ca-bundle\") pod \"neutron-5f9cd6669d-kmwxf\" (UID: \"d8d5b61c-f184-4963-ba9f-9cf698fd8e60\") " pod="openstack/neutron-5f9cd6669d-kmwxf" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.624539 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-wrgpq\" (UID: \"1962316d-f4d0-407d-8070-1b208432c8fa\") " pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.624636 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-config\") pod \"neutron-5f9cd6669d-kmwxf\" (UID: \"d8d5b61c-f184-4963-ba9f-9cf698fd8e60\") " pod="openstack/neutron-5f9cd6669d-kmwxf" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.624667 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-config\") pod \"dnsmasq-dns-55f844cf75-wrgpq\" (UID: \"1962316d-f4d0-407d-8070-1b208432c8fa\") " pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.726645 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-httpd-config\") pod \"neutron-5f9cd6669d-kmwxf\" (UID: \"d8d5b61c-f184-4963-ba9f-9cf698fd8e60\") " pod="openstack/neutron-5f9cd6669d-kmwxf" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.726760 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-wrgpq\" (UID: \"1962316d-f4d0-407d-8070-1b208432c8fa\") " pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.726808 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-combined-ca-bundle\") pod \"neutron-5f9cd6669d-kmwxf\" (UID: \"d8d5b61c-f184-4963-ba9f-9cf698fd8e60\") " pod="openstack/neutron-5f9cd6669d-kmwxf" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.726857 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-wrgpq\" (UID: \"1962316d-f4d0-407d-8070-1b208432c8fa\") " pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.726912 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-config\") pod \"neutron-5f9cd6669d-kmwxf\" (UID: \"d8d5b61c-f184-4963-ba9f-9cf698fd8e60\") " pod="openstack/neutron-5f9cd6669d-kmwxf" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.726938 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-config\") pod \"dnsmasq-dns-55f844cf75-wrgpq\" (UID: \"1962316d-f4d0-407d-8070-1b208432c8fa\") " pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.726964 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-ovndb-tls-certs\") pod 
\"neutron-5f9cd6669d-kmwxf\" (UID: \"d8d5b61c-f184-4963-ba9f-9cf698fd8e60\") " pod="openstack/neutron-5f9cd6669d-kmwxf" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.726994 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-dns-svc\") pod \"dnsmasq-dns-55f844cf75-wrgpq\" (UID: \"1962316d-f4d0-407d-8070-1b208432c8fa\") " pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.727048 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-wrgpq\" (UID: \"1962316d-f4d0-407d-8070-1b208432c8fa\") " pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.727082 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2w7c\" (UniqueName: \"kubernetes.io/projected/1962316d-f4d0-407d-8070-1b208432c8fa-kube-api-access-c2w7c\") pod \"dnsmasq-dns-55f844cf75-wrgpq\" (UID: \"1962316d-f4d0-407d-8070-1b208432c8fa\") " pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.727111 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk92c\" (UniqueName: \"kubernetes.io/projected/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-kube-api-access-hk92c\") pod \"neutron-5f9cd6669d-kmwxf\" (UID: \"d8d5b61c-f184-4963-ba9f-9cf698fd8e60\") " pod="openstack/neutron-5f9cd6669d-kmwxf" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.728695 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-wrgpq\" (UID: 
\"1962316d-f4d0-407d-8070-1b208432c8fa\") " pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.728908 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-config\") pod \"dnsmasq-dns-55f844cf75-wrgpq\" (UID: \"1962316d-f4d0-407d-8070-1b208432c8fa\") " pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.729605 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-dns-svc\") pod \"dnsmasq-dns-55f844cf75-wrgpq\" (UID: \"1962316d-f4d0-407d-8070-1b208432c8fa\") " pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.730312 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-wrgpq\" (UID: \"1962316d-f4d0-407d-8070-1b208432c8fa\") " pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.731035 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-wrgpq\" (UID: \"1962316d-f4d0-407d-8070-1b208432c8fa\") " pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.731414 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-httpd-config\") pod \"neutron-5f9cd6669d-kmwxf\" (UID: \"d8d5b61c-f184-4963-ba9f-9cf698fd8e60\") " pod="openstack/neutron-5f9cd6669d-kmwxf" Nov 25 14:45:57 crc kubenswrapper[4796]: 
I1125 14:45:57.733110 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-combined-ca-bundle\") pod \"neutron-5f9cd6669d-kmwxf\" (UID: \"d8d5b61c-f184-4963-ba9f-9cf698fd8e60\") " pod="openstack/neutron-5f9cd6669d-kmwxf" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.733425 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-config\") pod \"neutron-5f9cd6669d-kmwxf\" (UID: \"d8d5b61c-f184-4963-ba9f-9cf698fd8e60\") " pod="openstack/neutron-5f9cd6669d-kmwxf" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.741975 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-ovndb-tls-certs\") pod \"neutron-5f9cd6669d-kmwxf\" (UID: \"d8d5b61c-f184-4963-ba9f-9cf698fd8e60\") " pod="openstack/neutron-5f9cd6669d-kmwxf" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.751082 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk92c\" (UniqueName: \"kubernetes.io/projected/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-kube-api-access-hk92c\") pod \"neutron-5f9cd6669d-kmwxf\" (UID: \"d8d5b61c-f184-4963-ba9f-9cf698fd8e60\") " pod="openstack/neutron-5f9cd6669d-kmwxf" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.755457 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2w7c\" (UniqueName: \"kubernetes.io/projected/1962316d-f4d0-407d-8070-1b208432c8fa-kube-api-access-c2w7c\") pod \"dnsmasq-dns-55f844cf75-wrgpq\" (UID: \"1962316d-f4d0-407d-8070-1b208432c8fa\") " pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.827996 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" Nov 25 14:45:57 crc kubenswrapper[4796]: I1125 14:45:57.886268 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f9cd6669d-kmwxf" Nov 25 14:45:58 crc kubenswrapper[4796]: I1125 14:45:58.357968 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79bd96dcd6-f2n5f" event={"ID":"970dd58d-4266-4a39-9d8b-75190f4286bc","Type":"ContainerStarted","Data":"fbc772bd919e5c7839418783dfb564e3a01d8e5b5b48f6d1494d263faa950029"} Nov 25 14:45:58 crc kubenswrapper[4796]: I1125 14:45:58.358748 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-79bd96dcd6-f2n5f" Nov 25 14:45:58 crc kubenswrapper[4796]: I1125 14:45:58.358799 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-79bd96dcd6-f2n5f" Nov 25 14:45:58 crc kubenswrapper[4796]: I1125 14:45:58.365151 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-wrgpq"] Nov 25 14:45:58 crc kubenswrapper[4796]: W1125 14:45:58.371807 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1962316d_f4d0_407d_8070_1b208432c8fa.slice/crio-c02cea1eec179bea64f6c1c4d18bddda7ccb185bf8fe89a7e8364eb8bb300476 WatchSource:0}: Error finding container c02cea1eec179bea64f6c1c4d18bddda7ccb185bf8fe89a7e8364eb8bb300476: Status 404 returned error can't find the container with id c02cea1eec179bea64f6c1c4d18bddda7ccb185bf8fe89a7e8364eb8bb300476 Nov 25 14:45:58 crc kubenswrapper[4796]: I1125 14:45:58.396987 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-79bd96dcd6-f2n5f" podStartSLOduration=3.396965771 podStartE2EDuration="3.396965771s" podCreationTimestamp="2025-11-25 14:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:45:58.38155663 +0000 UTC m=+1286.724666054" watchObservedRunningTime="2025-11-25 14:45:58.396965771 +0000 UTC m=+1286.740075195" Nov 25 14:45:58 crc kubenswrapper[4796]: I1125 14:45:58.468174 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f9cd6669d-kmwxf"] Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.369953 4796 generic.go:334] "Generic (PLEG): container finished" podID="1962316d-f4d0-407d-8070-1b208432c8fa" containerID="5c5099efa7f1b459b86eccf9286bd56a211aceb253fb4a0c05a452842cd10f75" exitCode=0 Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.370046 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" event={"ID":"1962316d-f4d0-407d-8070-1b208432c8fa","Type":"ContainerDied","Data":"5c5099efa7f1b459b86eccf9286bd56a211aceb253fb4a0c05a452842cd10f75"} Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.370454 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" event={"ID":"1962316d-f4d0-407d-8070-1b208432c8fa","Type":"ContainerStarted","Data":"c02cea1eec179bea64f6c1c4d18bddda7ccb185bf8fe89a7e8364eb8bb300476"} Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.372223 4796 generic.go:334] "Generic (PLEG): container finished" podID="c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2" containerID="2c63332215ffc1b35fcf28a45694b19de2296c9096f319e360944a2cfea88350" exitCode=0 Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.372287 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ttn2n" event={"ID":"c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2","Type":"ContainerDied","Data":"2c63332215ffc1b35fcf28a45694b19de2296c9096f319e360944a2cfea88350"} Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.376303 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f9cd6669d-kmwxf" 
event={"ID":"d8d5b61c-f184-4963-ba9f-9cf698fd8e60","Type":"ContainerStarted","Data":"c8fd2c83594bc4161ed256d9a8b2042d8fb823c113c4eb68b03349d6af746dbb"} Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.376341 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f9cd6669d-kmwxf" event={"ID":"d8d5b61c-f184-4963-ba9f-9cf698fd8e60","Type":"ContainerStarted","Data":"01bdb9cd08a222ecf6317a59ee836ade4f1c0f917baeb824370b477ded583ac9"} Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.376351 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f9cd6669d-kmwxf" event={"ID":"d8d5b61c-f184-4963-ba9f-9cf698fd8e60","Type":"ContainerStarted","Data":"63ab74923ecd7fb3bfbe77b3039dd96ded550e3a72d848e79eff6991e690c9cd"} Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.376590 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5f9cd6669d-kmwxf" Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.442877 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5f9cd6669d-kmwxf" podStartSLOduration=2.442854566 podStartE2EDuration="2.442854566s" podCreationTimestamp="2025-11-25 14:45:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:45:59.428144318 +0000 UTC m=+1287.771253732" watchObservedRunningTime="2025-11-25 14:45:59.442854566 +0000 UTC m=+1287.785963990" Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.804079 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7b8d7f79d9-dhp4t"] Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.810155 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7b8d7f79d9-dhp4t" Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.812310 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.814794 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.831387 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b8d7f79d9-dhp4t"] Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.880598 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d300f40d-3177-4832-9df9-b724d40b8622-internal-tls-certs\") pod \"neutron-7b8d7f79d9-dhp4t\" (UID: \"d300f40d-3177-4832-9df9-b724d40b8622\") " pod="openstack/neutron-7b8d7f79d9-dhp4t" Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.880689 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d300f40d-3177-4832-9df9-b724d40b8622-combined-ca-bundle\") pod \"neutron-7b8d7f79d9-dhp4t\" (UID: \"d300f40d-3177-4832-9df9-b724d40b8622\") " pod="openstack/neutron-7b8d7f79d9-dhp4t" Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.880736 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d300f40d-3177-4832-9df9-b724d40b8622-config\") pod \"neutron-7b8d7f79d9-dhp4t\" (UID: \"d300f40d-3177-4832-9df9-b724d40b8622\") " pod="openstack/neutron-7b8d7f79d9-dhp4t" Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.880938 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmh49\" (UniqueName: 
\"kubernetes.io/projected/d300f40d-3177-4832-9df9-b724d40b8622-kube-api-access-hmh49\") pod \"neutron-7b8d7f79d9-dhp4t\" (UID: \"d300f40d-3177-4832-9df9-b724d40b8622\") " pod="openstack/neutron-7b8d7f79d9-dhp4t" Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.881041 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d300f40d-3177-4832-9df9-b724d40b8622-public-tls-certs\") pod \"neutron-7b8d7f79d9-dhp4t\" (UID: \"d300f40d-3177-4832-9df9-b724d40b8622\") " pod="openstack/neutron-7b8d7f79d9-dhp4t" Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.881097 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d300f40d-3177-4832-9df9-b724d40b8622-httpd-config\") pod \"neutron-7b8d7f79d9-dhp4t\" (UID: \"d300f40d-3177-4832-9df9-b724d40b8622\") " pod="openstack/neutron-7b8d7f79d9-dhp4t" Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.881162 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d300f40d-3177-4832-9df9-b724d40b8622-ovndb-tls-certs\") pod \"neutron-7b8d7f79d9-dhp4t\" (UID: \"d300f40d-3177-4832-9df9-b724d40b8622\") " pod="openstack/neutron-7b8d7f79d9-dhp4t" Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.902801 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.902845 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.952267 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 14:45:59 crc kubenswrapper[4796]: 
I1125 14:45:59.963123 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.982956 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d300f40d-3177-4832-9df9-b724d40b8622-internal-tls-certs\") pod \"neutron-7b8d7f79d9-dhp4t\" (UID: \"d300f40d-3177-4832-9df9-b724d40b8622\") " pod="openstack/neutron-7b8d7f79d9-dhp4t" Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.983035 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d300f40d-3177-4832-9df9-b724d40b8622-combined-ca-bundle\") pod \"neutron-7b8d7f79d9-dhp4t\" (UID: \"d300f40d-3177-4832-9df9-b724d40b8622\") " pod="openstack/neutron-7b8d7f79d9-dhp4t" Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.983087 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d300f40d-3177-4832-9df9-b724d40b8622-config\") pod \"neutron-7b8d7f79d9-dhp4t\" (UID: \"d300f40d-3177-4832-9df9-b724d40b8622\") " pod="openstack/neutron-7b8d7f79d9-dhp4t" Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.983159 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmh49\" (UniqueName: \"kubernetes.io/projected/d300f40d-3177-4832-9df9-b724d40b8622-kube-api-access-hmh49\") pod \"neutron-7b8d7f79d9-dhp4t\" (UID: \"d300f40d-3177-4832-9df9-b724d40b8622\") " pod="openstack/neutron-7b8d7f79d9-dhp4t" Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.983192 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d300f40d-3177-4832-9df9-b724d40b8622-public-tls-certs\") pod \"neutron-7b8d7f79d9-dhp4t\" (UID: 
\"d300f40d-3177-4832-9df9-b724d40b8622\") " pod="openstack/neutron-7b8d7f79d9-dhp4t" Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.983222 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d300f40d-3177-4832-9df9-b724d40b8622-httpd-config\") pod \"neutron-7b8d7f79d9-dhp4t\" (UID: \"d300f40d-3177-4832-9df9-b724d40b8622\") " pod="openstack/neutron-7b8d7f79d9-dhp4t" Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.983260 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d300f40d-3177-4832-9df9-b724d40b8622-ovndb-tls-certs\") pod \"neutron-7b8d7f79d9-dhp4t\" (UID: \"d300f40d-3177-4832-9df9-b724d40b8622\") " pod="openstack/neutron-7b8d7f79d9-dhp4t" Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.990556 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d300f40d-3177-4832-9df9-b724d40b8622-public-tls-certs\") pod \"neutron-7b8d7f79d9-dhp4t\" (UID: \"d300f40d-3177-4832-9df9-b724d40b8622\") " pod="openstack/neutron-7b8d7f79d9-dhp4t" Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.991126 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d300f40d-3177-4832-9df9-b724d40b8622-ovndb-tls-certs\") pod \"neutron-7b8d7f79d9-dhp4t\" (UID: \"d300f40d-3177-4832-9df9-b724d40b8622\") " pod="openstack/neutron-7b8d7f79d9-dhp4t" Nov 25 14:45:59 crc kubenswrapper[4796]: I1125 14:45:59.993877 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d300f40d-3177-4832-9df9-b724d40b8622-internal-tls-certs\") pod \"neutron-7b8d7f79d9-dhp4t\" (UID: \"d300f40d-3177-4832-9df9-b724d40b8622\") " pod="openstack/neutron-7b8d7f79d9-dhp4t" Nov 25 14:45:59 crc 
kubenswrapper[4796]: I1125 14:45:59.995224 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d300f40d-3177-4832-9df9-b724d40b8622-config\") pod \"neutron-7b8d7f79d9-dhp4t\" (UID: \"d300f40d-3177-4832-9df9-b724d40b8622\") " pod="openstack/neutron-7b8d7f79d9-dhp4t" Nov 25 14:46:00 crc kubenswrapper[4796]: I1125 14:46:00.006279 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d300f40d-3177-4832-9df9-b724d40b8622-httpd-config\") pod \"neutron-7b8d7f79d9-dhp4t\" (UID: \"d300f40d-3177-4832-9df9-b724d40b8622\") " pod="openstack/neutron-7b8d7f79d9-dhp4t" Nov 25 14:46:00 crc kubenswrapper[4796]: I1125 14:46:00.007252 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d300f40d-3177-4832-9df9-b724d40b8622-combined-ca-bundle\") pod \"neutron-7b8d7f79d9-dhp4t\" (UID: \"d300f40d-3177-4832-9df9-b724d40b8622\") " pod="openstack/neutron-7b8d7f79d9-dhp4t" Nov 25 14:46:00 crc kubenswrapper[4796]: I1125 14:46:00.021203 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmh49\" (UniqueName: \"kubernetes.io/projected/d300f40d-3177-4832-9df9-b724d40b8622-kube-api-access-hmh49\") pod \"neutron-7b8d7f79d9-dhp4t\" (UID: \"d300f40d-3177-4832-9df9-b724d40b8622\") " pod="openstack/neutron-7b8d7f79d9-dhp4t" Nov 25 14:46:00 crc kubenswrapper[4796]: I1125 14:46:00.131525 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7b8d7f79d9-dhp4t" Nov 25 14:46:00 crc kubenswrapper[4796]: I1125 14:46:00.427099 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" event={"ID":"1962316d-f4d0-407d-8070-1b208432c8fa","Type":"ContainerStarted","Data":"9686ba6dfc1df1bdf453de78c87e3d0f7c971a027b71953541039fe8127fa4aa"} Nov 25 14:46:00 crc kubenswrapper[4796]: I1125 14:46:00.427151 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 14:46:00 crc kubenswrapper[4796]: I1125 14:46:00.427165 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 14:46:00 crc kubenswrapper[4796]: I1125 14:46:00.668865 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 14:46:00 crc kubenswrapper[4796]: I1125 14:46:00.668921 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 14:46:00 crc kubenswrapper[4796]: I1125 14:46:00.728813 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 14:46:00 crc kubenswrapper[4796]: I1125 14:46:00.734936 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 14:46:00 crc kubenswrapper[4796]: I1125 14:46:00.811553 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b8d7f79d9-dhp4t"] Nov 25 14:46:00 crc kubenswrapper[4796]: I1125 14:46:00.978689 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-ttn2n" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.002337 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p9kt\" (UniqueName: \"kubernetes.io/projected/c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2-kube-api-access-7p9kt\") pod \"c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2\" (UID: \"c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2\") " Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.003004 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2-combined-ca-bundle\") pod \"c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2\" (UID: \"c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2\") " Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.003103 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2-db-sync-config-data\") pod \"c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2\" (UID: \"c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2\") " Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.008302 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2" (UID: "c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.008458 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2-kube-api-access-7p9kt" (OuterVolumeSpecName: "kube-api-access-7p9kt") pod "c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2" (UID: "c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2"). 
InnerVolumeSpecName "kube-api-access-7p9kt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.037086 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2" (UID: "c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.104678 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.104900 4796 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.104910 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p9kt\" (UniqueName: \"kubernetes.io/projected/c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2-kube-api-access-7p9kt\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.444927 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b8d7f79d9-dhp4t" event={"ID":"d300f40d-3177-4832-9df9-b724d40b8622","Type":"ContainerStarted","Data":"127bec0148ba3c476c9f72ea759891dd19d5a75e9320fd39e1c69ca8eb340290"} Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.444970 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b8d7f79d9-dhp4t" 
event={"ID":"d300f40d-3177-4832-9df9-b724d40b8622","Type":"ContainerStarted","Data":"cb0d6212e7f010cc161f001d95c65a4faa5c662931c1c156ca0118000caf5196"} Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.447641 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-ttn2n" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.448342 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ttn2n" event={"ID":"c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2","Type":"ContainerDied","Data":"b613cf464a236546766f8092da4e2c4b310c0d16185a2b7772d07180768e4bf0"} Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.448398 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b613cf464a236546766f8092da4e2c4b310c0d16185a2b7772d07180768e4bf0" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.448418 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.450214 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.450273 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.630098 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" podStartSLOduration=4.630082753 podStartE2EDuration="4.630082753s" podCreationTimestamp="2025-11-25 14:45:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:46:01.474957476 +0000 UTC m=+1289.818066910" watchObservedRunningTime="2025-11-25 14:46:01.630082753 +0000 UTC m=+1289.973192177" Nov 25 14:46:01 crc 
kubenswrapper[4796]: I1125 14:46:01.637981 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-696c6c8f78-kwfxh"] Nov 25 14:46:01 crc kubenswrapper[4796]: E1125 14:46:01.638351 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2" containerName="barbican-db-sync" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.638369 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2" containerName="barbican-db-sync" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.638564 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2" containerName="barbican-db-sync" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.639488 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-696c6c8f78-kwfxh" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.640712 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-6c45s" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.643050 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.643269 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.705701 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-847768d9dc-hdkcj"] Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.707272 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-847768d9dc-hdkcj" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.711216 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.735291 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-696c6c8f78-kwfxh"] Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.745931 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-847768d9dc-hdkcj"] Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.818391 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w786\" (UniqueName: \"kubernetes.io/projected/c2ea5acd-889d-439f-9295-39424d08c923-kube-api-access-8w786\") pod \"barbican-worker-847768d9dc-hdkcj\" (UID: \"c2ea5acd-889d-439f-9295-39424d08c923\") " pod="openstack/barbican-worker-847768d9dc-hdkcj" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.818456 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71e86788-aa18-413b-aaa7-f216ef8d4f2b-logs\") pod \"barbican-keystone-listener-696c6c8f78-kwfxh\" (UID: \"71e86788-aa18-413b-aaa7-f216ef8d4f2b\") " pod="openstack/barbican-keystone-listener-696c6c8f78-kwfxh" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.818522 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71e86788-aa18-413b-aaa7-f216ef8d4f2b-config-data-custom\") pod \"barbican-keystone-listener-696c6c8f78-kwfxh\" (UID: \"71e86788-aa18-413b-aaa7-f216ef8d4f2b\") " pod="openstack/barbican-keystone-listener-696c6c8f78-kwfxh" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.818547 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2ea5acd-889d-439f-9295-39424d08c923-config-data\") pod \"barbican-worker-847768d9dc-hdkcj\" (UID: \"c2ea5acd-889d-439f-9295-39424d08c923\") " pod="openstack/barbican-worker-847768d9dc-hdkcj" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.818587 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e86788-aa18-413b-aaa7-f216ef8d4f2b-combined-ca-bundle\") pod \"barbican-keystone-listener-696c6c8f78-kwfxh\" (UID: \"71e86788-aa18-413b-aaa7-f216ef8d4f2b\") " pod="openstack/barbican-keystone-listener-696c6c8f78-kwfxh" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.818606 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2ea5acd-889d-439f-9295-39424d08c923-combined-ca-bundle\") pod \"barbican-worker-847768d9dc-hdkcj\" (UID: \"c2ea5acd-889d-439f-9295-39424d08c923\") " pod="openstack/barbican-worker-847768d9dc-hdkcj" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.818650 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71e86788-aa18-413b-aaa7-f216ef8d4f2b-config-data\") pod \"barbican-keystone-listener-696c6c8f78-kwfxh\" (UID: \"71e86788-aa18-413b-aaa7-f216ef8d4f2b\") " pod="openstack/barbican-keystone-listener-696c6c8f78-kwfxh" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.818673 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2ea5acd-889d-439f-9295-39424d08c923-config-data-custom\") pod \"barbican-worker-847768d9dc-hdkcj\" (UID: \"c2ea5acd-889d-439f-9295-39424d08c923\") " 
pod="openstack/barbican-worker-847768d9dc-hdkcj" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.818853 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwl2v\" (UniqueName: \"kubernetes.io/projected/71e86788-aa18-413b-aaa7-f216ef8d4f2b-kube-api-access-vwl2v\") pod \"barbican-keystone-listener-696c6c8f78-kwfxh\" (UID: \"71e86788-aa18-413b-aaa7-f216ef8d4f2b\") " pod="openstack/barbican-keystone-listener-696c6c8f78-kwfxh" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.818954 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2ea5acd-889d-439f-9295-39424d08c923-logs\") pod \"barbican-worker-847768d9dc-hdkcj\" (UID: \"c2ea5acd-889d-439f-9295-39424d08c923\") " pod="openstack/barbican-worker-847768d9dc-hdkcj" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.881174 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-wrgpq"] Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.920805 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ddlb4"] Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.922529 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.923969 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2ea5acd-889d-439f-9295-39424d08c923-logs\") pod \"barbican-worker-847768d9dc-hdkcj\" (UID: \"c2ea5acd-889d-439f-9295-39424d08c923\") " pod="openstack/barbican-worker-847768d9dc-hdkcj" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.924003 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w786\" (UniqueName: \"kubernetes.io/projected/c2ea5acd-889d-439f-9295-39424d08c923-kube-api-access-8w786\") pod \"barbican-worker-847768d9dc-hdkcj\" (UID: \"c2ea5acd-889d-439f-9295-39424d08c923\") " pod="openstack/barbican-worker-847768d9dc-hdkcj" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.924034 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71e86788-aa18-413b-aaa7-f216ef8d4f2b-logs\") pod \"barbican-keystone-listener-696c6c8f78-kwfxh\" (UID: \"71e86788-aa18-413b-aaa7-f216ef8d4f2b\") " pod="openstack/barbican-keystone-listener-696c6c8f78-kwfxh" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.924090 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71e86788-aa18-413b-aaa7-f216ef8d4f2b-config-data-custom\") pod \"barbican-keystone-listener-696c6c8f78-kwfxh\" (UID: \"71e86788-aa18-413b-aaa7-f216ef8d4f2b\") " pod="openstack/barbican-keystone-listener-696c6c8f78-kwfxh" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.924109 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2ea5acd-889d-439f-9295-39424d08c923-config-data\") pod \"barbican-worker-847768d9dc-hdkcj\" (UID: 
\"c2ea5acd-889d-439f-9295-39424d08c923\") " pod="openstack/barbican-worker-847768d9dc-hdkcj" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.924131 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e86788-aa18-413b-aaa7-f216ef8d4f2b-combined-ca-bundle\") pod \"barbican-keystone-listener-696c6c8f78-kwfxh\" (UID: \"71e86788-aa18-413b-aaa7-f216ef8d4f2b\") " pod="openstack/barbican-keystone-listener-696c6c8f78-kwfxh" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.924149 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2ea5acd-889d-439f-9295-39424d08c923-combined-ca-bundle\") pod \"barbican-worker-847768d9dc-hdkcj\" (UID: \"c2ea5acd-889d-439f-9295-39424d08c923\") " pod="openstack/barbican-worker-847768d9dc-hdkcj" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.924190 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71e86788-aa18-413b-aaa7-f216ef8d4f2b-config-data\") pod \"barbican-keystone-listener-696c6c8f78-kwfxh\" (UID: \"71e86788-aa18-413b-aaa7-f216ef8d4f2b\") " pod="openstack/barbican-keystone-listener-696c6c8f78-kwfxh" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.924207 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2ea5acd-889d-439f-9295-39424d08c923-config-data-custom\") pod \"barbican-worker-847768d9dc-hdkcj\" (UID: \"c2ea5acd-889d-439f-9295-39424d08c923\") " pod="openstack/barbican-worker-847768d9dc-hdkcj" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.924252 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwl2v\" (UniqueName: 
\"kubernetes.io/projected/71e86788-aa18-413b-aaa7-f216ef8d4f2b-kube-api-access-vwl2v\") pod \"barbican-keystone-listener-696c6c8f78-kwfxh\" (UID: \"71e86788-aa18-413b-aaa7-f216ef8d4f2b\") " pod="openstack/barbican-keystone-listener-696c6c8f78-kwfxh" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.924820 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2ea5acd-889d-439f-9295-39424d08c923-logs\") pod \"barbican-worker-847768d9dc-hdkcj\" (UID: \"c2ea5acd-889d-439f-9295-39424d08c923\") " pod="openstack/barbican-worker-847768d9dc-hdkcj" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.925187 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71e86788-aa18-413b-aaa7-f216ef8d4f2b-logs\") pod \"barbican-keystone-listener-696c6c8f78-kwfxh\" (UID: \"71e86788-aa18-413b-aaa7-f216ef8d4f2b\") " pod="openstack/barbican-keystone-listener-696c6c8f78-kwfxh" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.931726 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71e86788-aa18-413b-aaa7-f216ef8d4f2b-config-data-custom\") pod \"barbican-keystone-listener-696c6c8f78-kwfxh\" (UID: \"71e86788-aa18-413b-aaa7-f216ef8d4f2b\") " pod="openstack/barbican-keystone-listener-696c6c8f78-kwfxh" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.937553 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ddlb4"] Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.948392 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2ea5acd-889d-439f-9295-39424d08c923-config-data-custom\") pod \"barbican-worker-847768d9dc-hdkcj\" (UID: \"c2ea5acd-889d-439f-9295-39424d08c923\") " pod="openstack/barbican-worker-847768d9dc-hdkcj" Nov 25 14:46:01 
crc kubenswrapper[4796]: I1125 14:46:01.949113 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71e86788-aa18-413b-aaa7-f216ef8d4f2b-config-data\") pod \"barbican-keystone-listener-696c6c8f78-kwfxh\" (UID: \"71e86788-aa18-413b-aaa7-f216ef8d4f2b\") " pod="openstack/barbican-keystone-listener-696c6c8f78-kwfxh" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.949734 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71e86788-aa18-413b-aaa7-f216ef8d4f2b-combined-ca-bundle\") pod \"barbican-keystone-listener-696c6c8f78-kwfxh\" (UID: \"71e86788-aa18-413b-aaa7-f216ef8d4f2b\") " pod="openstack/barbican-keystone-listener-696c6c8f78-kwfxh" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.952382 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2ea5acd-889d-439f-9295-39424d08c923-combined-ca-bundle\") pod \"barbican-worker-847768d9dc-hdkcj\" (UID: \"c2ea5acd-889d-439f-9295-39424d08c923\") " pod="openstack/barbican-worker-847768d9dc-hdkcj" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.952949 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w786\" (UniqueName: \"kubernetes.io/projected/c2ea5acd-889d-439f-9295-39424d08c923-kube-api-access-8w786\") pod \"barbican-worker-847768d9dc-hdkcj\" (UID: \"c2ea5acd-889d-439f-9295-39424d08c923\") " pod="openstack/barbican-worker-847768d9dc-hdkcj" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.953411 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2ea5acd-889d-439f-9295-39424d08c923-config-data\") pod \"barbican-worker-847768d9dc-hdkcj\" (UID: \"c2ea5acd-889d-439f-9295-39424d08c923\") " pod="openstack/barbican-worker-847768d9dc-hdkcj" Nov 25 14:46:01 crc 
kubenswrapper[4796]: I1125 14:46:01.960178 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwl2v\" (UniqueName: \"kubernetes.io/projected/71e86788-aa18-413b-aaa7-f216ef8d4f2b-kube-api-access-vwl2v\") pod \"barbican-keystone-listener-696c6c8f78-kwfxh\" (UID: \"71e86788-aa18-413b-aaa7-f216ef8d4f2b\") " pod="openstack/barbican-keystone-listener-696c6c8f78-kwfxh" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.984252 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6b8bdcff86-mhf8m"] Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.992344 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b8bdcff86-mhf8m" Nov 25 14:46:01 crc kubenswrapper[4796]: I1125 14:46:01.994238 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.005314 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b8bdcff86-mhf8m"] Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.019367 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-696c6c8f78-kwfxh" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.025048 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-ddlb4\" (UID: \"7399716c-47ba-4a11-81b0-f206c95855df\") " pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.025185 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-ddlb4\" (UID: \"7399716c-47ba-4a11-81b0-f206c95855df\") " pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.025206 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-ddlb4\" (UID: \"7399716c-47ba-4a11-81b0-f206c95855df\") " pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.027402 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-config\") pod \"dnsmasq-dns-85ff748b95-ddlb4\" (UID: \"7399716c-47ba-4a11-81b0-f206c95855df\") " pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.027440 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-dns-svc\") pod 
\"dnsmasq-dns-85ff748b95-ddlb4\" (UID: \"7399716c-47ba-4a11-81b0-f206c95855df\") " pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.027467 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lklfg\" (UniqueName: \"kubernetes.io/projected/7399716c-47ba-4a11-81b0-f206c95855df-kube-api-access-lklfg\") pod \"dnsmasq-dns-85ff748b95-ddlb4\" (UID: \"7399716c-47ba-4a11-81b0-f206c95855df\") " pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.060945 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-847768d9dc-hdkcj" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.133167 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-ddlb4\" (UID: \"7399716c-47ba-4a11-81b0-f206c95855df\") " pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.133216 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-ddlb4\" (UID: \"7399716c-47ba-4a11-81b0-f206c95855df\") " pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.133242 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-config-data\") pod \"barbican-api-6b8bdcff86-mhf8m\" (UID: \"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b\") " pod="openstack/barbican-api-6b8bdcff86-mhf8m" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.133270 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-config\") pod \"dnsmasq-dns-85ff748b95-ddlb4\" (UID: \"7399716c-47ba-4a11-81b0-f206c95855df\") " pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.133294 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-dns-svc\") pod \"dnsmasq-dns-85ff748b95-ddlb4\" (UID: \"7399716c-47ba-4a11-81b0-f206c95855df\") " pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.133318 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lklfg\" (UniqueName: \"kubernetes.io/projected/7399716c-47ba-4a11-81b0-f206c95855df-kube-api-access-lklfg\") pod \"dnsmasq-dns-85ff748b95-ddlb4\" (UID: \"7399716c-47ba-4a11-81b0-f206c95855df\") " pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.133363 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-combined-ca-bundle\") pod \"barbican-api-6b8bdcff86-mhf8m\" (UID: \"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b\") " pod="openstack/barbican-api-6b8bdcff86-mhf8m" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.133412 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-config-data-custom\") pod \"barbican-api-6b8bdcff86-mhf8m\" (UID: \"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b\") " pod="openstack/barbican-api-6b8bdcff86-mhf8m" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.133444 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-ddlb4\" (UID: \"7399716c-47ba-4a11-81b0-f206c95855df\") " pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.133467 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6t9w\" (UniqueName: \"kubernetes.io/projected/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-kube-api-access-d6t9w\") pod \"barbican-api-6b8bdcff86-mhf8m\" (UID: \"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b\") " pod="openstack/barbican-api-6b8bdcff86-mhf8m" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.133502 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-logs\") pod \"barbican-api-6b8bdcff86-mhf8m\" (UID: \"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b\") " pod="openstack/barbican-api-6b8bdcff86-mhf8m" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.134427 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-ddlb4\" (UID: \"7399716c-47ba-4a11-81b0-f206c95855df\") " pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.134607 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-ddlb4\" (UID: \"7399716c-47ba-4a11-81b0-f206c95855df\") " pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.135111 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-ddlb4\" (UID: \"7399716c-47ba-4a11-81b0-f206c95855df\") " pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.138718 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-dns-svc\") pod \"dnsmasq-dns-85ff748b95-ddlb4\" (UID: \"7399716c-47ba-4a11-81b0-f206c95855df\") " pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.145657 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-config\") pod \"dnsmasq-dns-85ff748b95-ddlb4\" (UID: \"7399716c-47ba-4a11-81b0-f206c95855df\") " pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.155661 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lklfg\" (UniqueName: \"kubernetes.io/projected/7399716c-47ba-4a11-81b0-f206c95855df-kube-api-access-lklfg\") pod \"dnsmasq-dns-85ff748b95-ddlb4\" (UID: \"7399716c-47ba-4a11-81b0-f206c95855df\") " pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.234992 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-config-data-custom\") pod \"barbican-api-6b8bdcff86-mhf8m\" (UID: \"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b\") " pod="openstack/barbican-api-6b8bdcff86-mhf8m" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.235076 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-d6t9w\" (UniqueName: \"kubernetes.io/projected/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-kube-api-access-d6t9w\") pod \"barbican-api-6b8bdcff86-mhf8m\" (UID: \"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b\") " pod="openstack/barbican-api-6b8bdcff86-mhf8m" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.235117 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-logs\") pod \"barbican-api-6b8bdcff86-mhf8m\" (UID: \"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b\") " pod="openstack/barbican-api-6b8bdcff86-mhf8m" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.235191 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-config-data\") pod \"barbican-api-6b8bdcff86-mhf8m\" (UID: \"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b\") " pod="openstack/barbican-api-6b8bdcff86-mhf8m" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.235244 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-combined-ca-bundle\") pod \"barbican-api-6b8bdcff86-mhf8m\" (UID: \"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b\") " pod="openstack/barbican-api-6b8bdcff86-mhf8m" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.239603 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-config-data-custom\") pod \"barbican-api-6b8bdcff86-mhf8m\" (UID: \"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b\") " pod="openstack/barbican-api-6b8bdcff86-mhf8m" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.240481 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-logs\") pod \"barbican-api-6b8bdcff86-mhf8m\" (UID: \"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b\") " pod="openstack/barbican-api-6b8bdcff86-mhf8m" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.241008 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-combined-ca-bundle\") pod \"barbican-api-6b8bdcff86-mhf8m\" (UID: \"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b\") " pod="openstack/barbican-api-6b8bdcff86-mhf8m" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.244070 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-config-data\") pod \"barbican-api-6b8bdcff86-mhf8m\" (UID: \"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b\") " pod="openstack/barbican-api-6b8bdcff86-mhf8m" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.259682 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6t9w\" (UniqueName: \"kubernetes.io/projected/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-kube-api-access-d6t9w\") pod \"barbican-api-6b8bdcff86-mhf8m\" (UID: \"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b\") " pod="openstack/barbican-api-6b8bdcff86-mhf8m" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.319101 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cd9956864-5xkx5" podUID="23942f6c-a777-4b11-a51d-ccaee1fff6e7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.144:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.144:8443: connect: connection refused" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.346151 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.379313 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b8bdcff86-mhf8m" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.422751 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-674489f5b-nnl97" podUID="b8f52433-dd17-499e-8ac4-bda250a52460" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.453836 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 14:46:02 crc kubenswrapper[4796]: I1125 14:46:02.998241 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 14:46:03 crc kubenswrapper[4796]: I1125 14:46:03.459858 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 14:46:03 crc kubenswrapper[4796]: I1125 14:46:03.459889 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 14:46:03 crc kubenswrapper[4796]: I1125 14:46:03.460161 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 14:46:03 crc kubenswrapper[4796]: I1125 14:46:03.460055 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" podUID="1962316d-f4d0-407d-8070-1b208432c8fa" containerName="dnsmasq-dns" containerID="cri-o://9686ba6dfc1df1bdf453de78c87e3d0f7c971a027b71953541039fe8127fa4aa" gracePeriod=10 Nov 25 14:46:03 crc kubenswrapper[4796]: I1125 14:46:03.983539 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.006561 
4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.341064 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.495185 4796 generic.go:334] "Generic (PLEG): container finished" podID="1962316d-f4d0-407d-8070-1b208432c8fa" containerID="9686ba6dfc1df1bdf453de78c87e3d0f7c971a027b71953541039fe8127fa4aa" exitCode=0 Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.495224 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" event={"ID":"1962316d-f4d0-407d-8070-1b208432c8fa","Type":"ContainerDied","Data":"9686ba6dfc1df1bdf453de78c87e3d0f7c971a027b71953541039fe8127fa4aa"} Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.585831 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-648cbfbf74-5bhgn"] Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.587791 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.594256 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.594539 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.622445 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-648cbfbf74-5bhgn"] Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.681624 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f31c41f3-602c-427d-8728-9368c92a8d35-logs\") pod \"barbican-api-648cbfbf74-5bhgn\" (UID: \"f31c41f3-602c-427d-8728-9368c92a8d35\") " pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.681818 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31c41f3-602c-427d-8728-9368c92a8d35-public-tls-certs\") pod \"barbican-api-648cbfbf74-5bhgn\" (UID: \"f31c41f3-602c-427d-8728-9368c92a8d35\") " pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.681840 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31c41f3-602c-427d-8728-9368c92a8d35-combined-ca-bundle\") pod \"barbican-api-648cbfbf74-5bhgn\" (UID: \"f31c41f3-602c-427d-8728-9368c92a8d35\") " pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.681857 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/f31c41f3-602c-427d-8728-9368c92a8d35-config-data-custom\") pod \"barbican-api-648cbfbf74-5bhgn\" (UID: \"f31c41f3-602c-427d-8728-9368c92a8d35\") " pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.681903 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31c41f3-602c-427d-8728-9368c92a8d35-internal-tls-certs\") pod \"barbican-api-648cbfbf74-5bhgn\" (UID: \"f31c41f3-602c-427d-8728-9368c92a8d35\") " pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.682020 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31c41f3-602c-427d-8728-9368c92a8d35-config-data\") pod \"barbican-api-648cbfbf74-5bhgn\" (UID: \"f31c41f3-602c-427d-8728-9368c92a8d35\") " pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.682078 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v8zt\" (UniqueName: \"kubernetes.io/projected/f31c41f3-602c-427d-8728-9368c92a8d35-kube-api-access-2v8zt\") pod \"barbican-api-648cbfbf74-5bhgn\" (UID: \"f31c41f3-602c-427d-8728-9368c92a8d35\") " pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.783273 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31c41f3-602c-427d-8728-9368c92a8d35-config-data\") pod \"barbican-api-648cbfbf74-5bhgn\" (UID: \"f31c41f3-602c-427d-8728-9368c92a8d35\") " pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.783357 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2v8zt\" (UniqueName: \"kubernetes.io/projected/f31c41f3-602c-427d-8728-9368c92a8d35-kube-api-access-2v8zt\") pod \"barbican-api-648cbfbf74-5bhgn\" (UID: \"f31c41f3-602c-427d-8728-9368c92a8d35\") " pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.783407 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f31c41f3-602c-427d-8728-9368c92a8d35-logs\") pod \"barbican-api-648cbfbf74-5bhgn\" (UID: \"f31c41f3-602c-427d-8728-9368c92a8d35\") " pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.783515 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31c41f3-602c-427d-8728-9368c92a8d35-public-tls-certs\") pod \"barbican-api-648cbfbf74-5bhgn\" (UID: \"f31c41f3-602c-427d-8728-9368c92a8d35\") " pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.783536 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31c41f3-602c-427d-8728-9368c92a8d35-combined-ca-bundle\") pod \"barbican-api-648cbfbf74-5bhgn\" (UID: \"f31c41f3-602c-427d-8728-9368c92a8d35\") " pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.783555 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f31c41f3-602c-427d-8728-9368c92a8d35-config-data-custom\") pod \"barbican-api-648cbfbf74-5bhgn\" (UID: \"f31c41f3-602c-427d-8728-9368c92a8d35\") " pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.783606 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f31c41f3-602c-427d-8728-9368c92a8d35-internal-tls-certs\") pod \"barbican-api-648cbfbf74-5bhgn\" (UID: \"f31c41f3-602c-427d-8728-9368c92a8d35\") " pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.784168 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f31c41f3-602c-427d-8728-9368c92a8d35-logs\") pod \"barbican-api-648cbfbf74-5bhgn\" (UID: \"f31c41f3-602c-427d-8728-9368c92a8d35\") " pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.789306 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31c41f3-602c-427d-8728-9368c92a8d35-public-tls-certs\") pod \"barbican-api-648cbfbf74-5bhgn\" (UID: \"f31c41f3-602c-427d-8728-9368c92a8d35\") " pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.789695 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f31c41f3-602c-427d-8728-9368c92a8d35-combined-ca-bundle\") pod \"barbican-api-648cbfbf74-5bhgn\" (UID: \"f31c41f3-602c-427d-8728-9368c92a8d35\") " pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.790602 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f31c41f3-602c-427d-8728-9368c92a8d35-config-data\") pod \"barbican-api-648cbfbf74-5bhgn\" (UID: \"f31c41f3-602c-427d-8728-9368c92a8d35\") " pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.791800 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f31c41f3-602c-427d-8728-9368c92a8d35-config-data-custom\") pod 
\"barbican-api-648cbfbf74-5bhgn\" (UID: \"f31c41f3-602c-427d-8728-9368c92a8d35\") " pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.799559 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v8zt\" (UniqueName: \"kubernetes.io/projected/f31c41f3-602c-427d-8728-9368c92a8d35-kube-api-access-2v8zt\") pod \"barbican-api-648cbfbf74-5bhgn\" (UID: \"f31c41f3-602c-427d-8728-9368c92a8d35\") " pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.799660 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f31c41f3-602c-427d-8728-9368c92a8d35-internal-tls-certs\") pod \"barbican-api-648cbfbf74-5bhgn\" (UID: \"f31c41f3-602c-427d-8728-9368c92a8d35\") " pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:04 crc kubenswrapper[4796]: I1125 14:46:04.916601 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:07 crc kubenswrapper[4796]: I1125 14:46:07.784178 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" Nov 25 14:46:07 crc kubenswrapper[4796]: I1125 14:46:07.850050 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-ovsdbserver-sb\") pod \"1962316d-f4d0-407d-8070-1b208432c8fa\" (UID: \"1962316d-f4d0-407d-8070-1b208432c8fa\") " Nov 25 14:46:07 crc kubenswrapper[4796]: I1125 14:46:07.850147 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-dns-svc\") pod \"1962316d-f4d0-407d-8070-1b208432c8fa\" (UID: \"1962316d-f4d0-407d-8070-1b208432c8fa\") " Nov 25 14:46:07 crc kubenswrapper[4796]: I1125 14:46:07.850171 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-config\") pod \"1962316d-f4d0-407d-8070-1b208432c8fa\" (UID: \"1962316d-f4d0-407d-8070-1b208432c8fa\") " Nov 25 14:46:07 crc kubenswrapper[4796]: I1125 14:46:07.850222 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2w7c\" (UniqueName: \"kubernetes.io/projected/1962316d-f4d0-407d-8070-1b208432c8fa-kube-api-access-c2w7c\") pod \"1962316d-f4d0-407d-8070-1b208432c8fa\" (UID: \"1962316d-f4d0-407d-8070-1b208432c8fa\") " Nov 25 14:46:07 crc kubenswrapper[4796]: I1125 14:46:07.850256 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-dns-swift-storage-0\") pod \"1962316d-f4d0-407d-8070-1b208432c8fa\" (UID: \"1962316d-f4d0-407d-8070-1b208432c8fa\") " Nov 25 14:46:07 crc kubenswrapper[4796]: I1125 14:46:07.850382 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-ovsdbserver-nb\") pod \"1962316d-f4d0-407d-8070-1b208432c8fa\" (UID: \"1962316d-f4d0-407d-8070-1b208432c8fa\") " Nov 25 14:46:07 crc kubenswrapper[4796]: I1125 14:46:07.859064 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1962316d-f4d0-407d-8070-1b208432c8fa-kube-api-access-c2w7c" (OuterVolumeSpecName: "kube-api-access-c2w7c") pod "1962316d-f4d0-407d-8070-1b208432c8fa" (UID: "1962316d-f4d0-407d-8070-1b208432c8fa"). InnerVolumeSpecName "kube-api-access-c2w7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:46:07 crc kubenswrapper[4796]: I1125 14:46:07.894283 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-config" (OuterVolumeSpecName: "config") pod "1962316d-f4d0-407d-8070-1b208432c8fa" (UID: "1962316d-f4d0-407d-8070-1b208432c8fa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:46:07 crc kubenswrapper[4796]: I1125 14:46:07.904839 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1962316d-f4d0-407d-8070-1b208432c8fa" (UID: "1962316d-f4d0-407d-8070-1b208432c8fa"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:46:07 crc kubenswrapper[4796]: I1125 14:46:07.908871 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1962316d-f4d0-407d-8070-1b208432c8fa" (UID: "1962316d-f4d0-407d-8070-1b208432c8fa"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:46:07 crc kubenswrapper[4796]: I1125 14:46:07.934289 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1962316d-f4d0-407d-8070-1b208432c8fa" (UID: "1962316d-f4d0-407d-8070-1b208432c8fa"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:46:07 crc kubenswrapper[4796]: I1125 14:46:07.954762 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:07 crc kubenswrapper[4796]: I1125 14:46:07.954791 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:07 crc kubenswrapper[4796]: I1125 14:46:07.954800 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:07 crc kubenswrapper[4796]: I1125 14:46:07.954811 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2w7c\" (UniqueName: \"kubernetes.io/projected/1962316d-f4d0-407d-8070-1b208432c8fa-kube-api-access-c2w7c\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:07 crc kubenswrapper[4796]: I1125 14:46:07.954821 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:07 crc kubenswrapper[4796]: I1125 14:46:07.997236 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1962316d-f4d0-407d-8070-1b208432c8fa" (UID: "1962316d-f4d0-407d-8070-1b208432c8fa"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:46:08 crc kubenswrapper[4796]: I1125 14:46:08.066031 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1962316d-f4d0-407d-8070-1b208432c8fa-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:08 crc kubenswrapper[4796]: I1125 14:46:08.365984 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-648cbfbf74-5bhgn"] Nov 25 14:46:08 crc kubenswrapper[4796]: I1125 14:46:08.495256 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-847768d9dc-hdkcj"] Nov 25 14:46:08 crc kubenswrapper[4796]: I1125 14:46:08.505963 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-696c6c8f78-kwfxh"] Nov 25 14:46:08 crc kubenswrapper[4796]: W1125 14:46:08.513467 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71e86788_aa18_413b_aaa7_f216ef8d4f2b.slice/crio-c264e427dabfdf8aee43d4fc87ab6019e32de7a9dced4a3c5f7f1bfd184d9714 WatchSource:0}: Error finding container c264e427dabfdf8aee43d4fc87ab6019e32de7a9dced4a3c5f7f1bfd184d9714: Status 404 returned error can't find the container with id c264e427dabfdf8aee43d4fc87ab6019e32de7a9dced4a3c5f7f1bfd184d9714 Nov 25 14:46:08 crc kubenswrapper[4796]: W1125 14:46:08.513946 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2ea5acd_889d_439f_9295_39424d08c923.slice/crio-6c6d669e0b118f3f66a01d2befd2748b6983a7d6791f5a2007818225790ed539 WatchSource:0}: Error finding container 
6c6d669e0b118f3f66a01d2befd2748b6983a7d6791f5a2007818225790ed539: Status 404 returned error can't find the container with id 6c6d669e0b118f3f66a01d2befd2748b6983a7d6791f5a2007818225790ed539 Nov 25 14:46:08 crc kubenswrapper[4796]: I1125 14:46:08.560011 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-847768d9dc-hdkcj" event={"ID":"c2ea5acd-889d-439f-9295-39424d08c923","Type":"ContainerStarted","Data":"6c6d669e0b118f3f66a01d2befd2748b6983a7d6791f5a2007818225790ed539"} Nov 25 14:46:08 crc kubenswrapper[4796]: I1125 14:46:08.561265 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-696c6c8f78-kwfxh" event={"ID":"71e86788-aa18-413b-aaa7-f216ef8d4f2b","Type":"ContainerStarted","Data":"c264e427dabfdf8aee43d4fc87ab6019e32de7a9dced4a3c5f7f1bfd184d9714"} Nov 25 14:46:08 crc kubenswrapper[4796]: I1125 14:46:08.564970 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b8d7f79d9-dhp4t" event={"ID":"d300f40d-3177-4832-9df9-b724d40b8622","Type":"ContainerStarted","Data":"1f1874827fb630e1fb74ec89ab6a36c1c4cb4a92cea3182dedcc589fe9b3c331"} Nov 25 14:46:08 crc kubenswrapper[4796]: I1125 14:46:08.565117 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7b8d7f79d9-dhp4t" Nov 25 14:46:08 crc kubenswrapper[4796]: I1125 14:46:08.568426 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"395c0fb7-8e73-4c01-a5fb-6b17af27e57d","Type":"ContainerStarted","Data":"0a26dbb305d039aea443bc352fa86884b78256fbef6d3799b6b0e1f19c2dfba7"} Nov 25 14:46:08 crc kubenswrapper[4796]: I1125 14:46:08.568536 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="395c0fb7-8e73-4c01-a5fb-6b17af27e57d" containerName="ceilometer-central-agent" containerID="cri-o://7c729dcdefc39f9939dd17140f8926e4086c5e4db268a5be72f078d7030e32a6" gracePeriod=30 Nov 25 14:46:08 crc 
kubenswrapper[4796]: I1125 14:46:08.568627 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 14:46:08 crc kubenswrapper[4796]: I1125 14:46:08.568640 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="395c0fb7-8e73-4c01-a5fb-6b17af27e57d" containerName="ceilometer-notification-agent" containerID="cri-o://51e66da980cc6d21137e7e19efabd2613836b2e47f8a568239f4ef10a6c89ff7" gracePeriod=30 Nov 25 14:46:08 crc kubenswrapper[4796]: I1125 14:46:08.568682 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="395c0fb7-8e73-4c01-a5fb-6b17af27e57d" containerName="proxy-httpd" containerID="cri-o://0a26dbb305d039aea443bc352fa86884b78256fbef6d3799b6b0e1f19c2dfba7" gracePeriod=30 Nov 25 14:46:08 crc kubenswrapper[4796]: I1125 14:46:08.568630 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="395c0fb7-8e73-4c01-a5fb-6b17af27e57d" containerName="sg-core" containerID="cri-o://deb3816973ceddbd56c1daae3aa482fb1b26e94d10c408e57e2148d0dbdcbccd" gracePeriod=30 Nov 25 14:46:08 crc kubenswrapper[4796]: I1125 14:46:08.574556 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-648cbfbf74-5bhgn" event={"ID":"f31c41f3-602c-427d-8728-9368c92a8d35","Type":"ContainerStarted","Data":"a5a3b1e2e5206552940a9c38d8dcd5876bd43871169499192495720393a8cc46"} Nov 25 14:46:08 crc kubenswrapper[4796]: I1125 14:46:08.578701 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" event={"ID":"1962316d-f4d0-407d-8070-1b208432c8fa","Type":"ContainerDied","Data":"c02cea1eec179bea64f6c1c4d18bddda7ccb185bf8fe89a7e8364eb8bb300476"} Nov 25 14:46:08 crc kubenswrapper[4796]: I1125 14:46:08.578753 4796 scope.go:117] "RemoveContainer" containerID="9686ba6dfc1df1bdf453de78c87e3d0f7c971a027b71953541039fe8127fa4aa" Nov 25 
14:46:08 crc kubenswrapper[4796]: I1125 14:46:08.578782 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-wrgpq" Nov 25 14:46:08 crc kubenswrapper[4796]: I1125 14:46:08.624863 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7b8d7f79d9-dhp4t" podStartSLOduration=9.624837643 podStartE2EDuration="9.624837643s" podCreationTimestamp="2025-11-25 14:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:46:08.598880915 +0000 UTC m=+1296.941990339" watchObservedRunningTime="2025-11-25 14:46:08.624837643 +0000 UTC m=+1296.967947067" Nov 25 14:46:08 crc kubenswrapper[4796]: W1125 14:46:08.644204 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7399716c_47ba_4a11_81b0_f206c95855df.slice/crio-c7e75af85b1a339d5a8e03028ede166c4a56861d15721cd2270da93a2e1b5d70 WatchSource:0}: Error finding container c7e75af85b1a339d5a8e03028ede166c4a56861d15721cd2270da93a2e1b5d70: Status 404 returned error can't find the container with id c7e75af85b1a339d5a8e03028ede166c4a56861d15721cd2270da93a2e1b5d70 Nov 25 14:46:08 crc kubenswrapper[4796]: I1125 14:46:08.650365 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-wrgpq"] Nov 25 14:46:08 crc kubenswrapper[4796]: I1125 14:46:08.660944 4796 scope.go:117] "RemoveContainer" containerID="5c5099efa7f1b459b86eccf9286bd56a211aceb253fb4a0c05a452842cd10f75" Nov 25 14:46:08 crc kubenswrapper[4796]: I1125 14:46:08.660956 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-wrgpq"] Nov 25 14:46:08 crc kubenswrapper[4796]: I1125 14:46:08.668172 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ddlb4"] Nov 25 14:46:08 crc 
kubenswrapper[4796]: I1125 14:46:08.678113 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b8bdcff86-mhf8m"] Nov 25 14:46:08 crc kubenswrapper[4796]: I1125 14:46:08.684682 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.534384572 podStartE2EDuration="55.684662299s" podCreationTimestamp="2025-11-25 14:45:13 +0000 UTC" firstStartedPulling="2025-11-25 14:45:14.845736893 +0000 UTC m=+1243.188846317" lastFinishedPulling="2025-11-25 14:46:07.99601462 +0000 UTC m=+1296.339124044" observedRunningTime="2025-11-25 14:46:08.637813158 +0000 UTC m=+1296.980922592" watchObservedRunningTime="2025-11-25 14:46:08.684662299 +0000 UTC m=+1297.027771723" Nov 25 14:46:09 crc kubenswrapper[4796]: I1125 14:46:09.595923 4796 generic.go:334] "Generic (PLEG): container finished" podID="7399716c-47ba-4a11-81b0-f206c95855df" containerID="c4548c58be93c87fce86465b9a44e96e0ffdf4db70952ba6e83fe1bb8bd3ed02" exitCode=0 Nov 25 14:46:09 crc kubenswrapper[4796]: I1125 14:46:09.596421 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" event={"ID":"7399716c-47ba-4a11-81b0-f206c95855df","Type":"ContainerDied","Data":"c4548c58be93c87fce86465b9a44e96e0ffdf4db70952ba6e83fe1bb8bd3ed02"} Nov 25 14:46:09 crc kubenswrapper[4796]: I1125 14:46:09.596651 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" event={"ID":"7399716c-47ba-4a11-81b0-f206c95855df","Type":"ContainerStarted","Data":"c7e75af85b1a339d5a8e03028ede166c4a56861d15721cd2270da93a2e1b5d70"} Nov 25 14:46:09 crc kubenswrapper[4796]: I1125 14:46:09.603701 4796 generic.go:334] "Generic (PLEG): container finished" podID="395c0fb7-8e73-4c01-a5fb-6b17af27e57d" containerID="0a26dbb305d039aea443bc352fa86884b78256fbef6d3799b6b0e1f19c2dfba7" exitCode=0 Nov 25 14:46:09 crc kubenswrapper[4796]: I1125 14:46:09.603754 4796 generic.go:334] "Generic (PLEG): 
container finished" podID="395c0fb7-8e73-4c01-a5fb-6b17af27e57d" containerID="deb3816973ceddbd56c1daae3aa482fb1b26e94d10c408e57e2148d0dbdcbccd" exitCode=2 Nov 25 14:46:09 crc kubenswrapper[4796]: I1125 14:46:09.603767 4796 generic.go:334] "Generic (PLEG): container finished" podID="395c0fb7-8e73-4c01-a5fb-6b17af27e57d" containerID="51e66da980cc6d21137e7e19efabd2613836b2e47f8a568239f4ef10a6c89ff7" exitCode=0 Nov 25 14:46:09 crc kubenswrapper[4796]: I1125 14:46:09.603779 4796 generic.go:334] "Generic (PLEG): container finished" podID="395c0fb7-8e73-4c01-a5fb-6b17af27e57d" containerID="7c729dcdefc39f9939dd17140f8926e4086c5e4db268a5be72f078d7030e32a6" exitCode=0 Nov 25 14:46:09 crc kubenswrapper[4796]: I1125 14:46:09.603872 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"395c0fb7-8e73-4c01-a5fb-6b17af27e57d","Type":"ContainerDied","Data":"0a26dbb305d039aea443bc352fa86884b78256fbef6d3799b6b0e1f19c2dfba7"} Nov 25 14:46:09 crc kubenswrapper[4796]: I1125 14:46:09.603914 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"395c0fb7-8e73-4c01-a5fb-6b17af27e57d","Type":"ContainerDied","Data":"deb3816973ceddbd56c1daae3aa482fb1b26e94d10c408e57e2148d0dbdcbccd"} Nov 25 14:46:09 crc kubenswrapper[4796]: I1125 14:46:09.603929 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"395c0fb7-8e73-4c01-a5fb-6b17af27e57d","Type":"ContainerDied","Data":"51e66da980cc6d21137e7e19efabd2613836b2e47f8a568239f4ef10a6c89ff7"} Nov 25 14:46:09 crc kubenswrapper[4796]: I1125 14:46:09.603941 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"395c0fb7-8e73-4c01-a5fb-6b17af27e57d","Type":"ContainerDied","Data":"7c729dcdefc39f9939dd17140f8926e4086c5e4db268a5be72f078d7030e32a6"} Nov 25 14:46:09 crc kubenswrapper[4796]: I1125 14:46:09.628980 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-648cbfbf74-5bhgn" event={"ID":"f31c41f3-602c-427d-8728-9368c92a8d35","Type":"ContainerStarted","Data":"6ef1edcbb841979509c18ebaa21498ce65ef9ec5086d546b93432ee14ea1d9fe"} Nov 25 14:46:09 crc kubenswrapper[4796]: I1125 14:46:09.629346 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-648cbfbf74-5bhgn" event={"ID":"f31c41f3-602c-427d-8728-9368c92a8d35","Type":"ContainerStarted","Data":"910b9dc86a41fdda1796424a9df89c42f7921dd0e089bfc6fe03a44274ee470a"} Nov 25 14:46:09 crc kubenswrapper[4796]: I1125 14:46:09.630460 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:09 crc kubenswrapper[4796]: I1125 14:46:09.630491 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:09 crc kubenswrapper[4796]: I1125 14:46:09.644014 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tt8qv" event={"ID":"b0493d28-3276-4a85-a800-4d0b1576c407","Type":"ContainerStarted","Data":"463903737e97d6d64a9615a13c2ff45e3d614f97444e82a5efc8c97e7c7b0161"} Nov 25 14:46:09 crc kubenswrapper[4796]: I1125 14:46:09.665465 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-648cbfbf74-5bhgn" podStartSLOduration=5.665448795 podStartE2EDuration="5.665448795s" podCreationTimestamp="2025-11-25 14:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:46:09.659039265 +0000 UTC m=+1298.002148689" watchObservedRunningTime="2025-11-25 14:46:09.665448795 +0000 UTC m=+1298.008558219" Nov 25 14:46:09 crc kubenswrapper[4796]: I1125 14:46:09.681202 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-tt8qv" podStartSLOduration=3.927795815 podStartE2EDuration="57.681187345s" 
podCreationTimestamp="2025-11-25 14:45:12 +0000 UTC" firstStartedPulling="2025-11-25 14:45:14.260303852 +0000 UTC m=+1242.603413276" lastFinishedPulling="2025-11-25 14:46:08.013695382 +0000 UTC m=+1296.356804806" observedRunningTime="2025-11-25 14:46:09.679805233 +0000 UTC m=+1298.022914657" watchObservedRunningTime="2025-11-25 14:46:09.681187345 +0000 UTC m=+1298.024296769" Nov 25 14:46:09 crc kubenswrapper[4796]: I1125 14:46:09.684123 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b8bdcff86-mhf8m" event={"ID":"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b","Type":"ContainerStarted","Data":"831512dbcd0b153ba9758ce12dff538a6b8ad577f50bd470280149b4e22be0e9"} Nov 25 14:46:09 crc kubenswrapper[4796]: I1125 14:46:09.684216 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b8bdcff86-mhf8m" Nov 25 14:46:09 crc kubenswrapper[4796]: I1125 14:46:09.684235 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b8bdcff86-mhf8m" event={"ID":"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b","Type":"ContainerStarted","Data":"3e3e2043a5944e072c9b65119ebacac447343c43b93c2d4f121cec4465088275"} Nov 25 14:46:09 crc kubenswrapper[4796]: I1125 14:46:09.684247 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b8bdcff86-mhf8m" event={"ID":"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b","Type":"ContainerStarted","Data":"7bda8ede50ec466dbe97f1e4f83a396e7c4b5af2d8b04639d0f4afcf834c0f93"} Nov 25 14:46:09 crc kubenswrapper[4796]: I1125 14:46:09.684298 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b8bdcff86-mhf8m" Nov 25 14:46:09 crc kubenswrapper[4796]: I1125 14:46:09.734964 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6b8bdcff86-mhf8m" podStartSLOduration=8.734934861 podStartE2EDuration="8.734934861s" podCreationTimestamp="2025-11-25 14:46:01 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:46:09.709898741 +0000 UTC m=+1298.053008175" watchObservedRunningTime="2025-11-25 14:46:09.734934861 +0000 UTC m=+1298.078044285" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.154038 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.211019 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5l42\" (UniqueName: \"kubernetes.io/projected/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-kube-api-access-v5l42\") pod \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.211099 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-combined-ca-bundle\") pod \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.211152 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-log-httpd\") pod \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.211173 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-run-httpd\") pod \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.211233 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-config-data\") pod \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.211440 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-sg-core-conf-yaml\") pod \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.211483 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-scripts\") pod \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\" (UID: \"395c0fb7-8e73-4c01-a5fb-6b17af27e57d\") " Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.213145 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "395c0fb7-8e73-4c01-a5fb-6b17af27e57d" (UID: "395c0fb7-8e73-4c01-a5fb-6b17af27e57d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.213169 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "395c0fb7-8e73-4c01-a5fb-6b17af27e57d" (UID: "395c0fb7-8e73-4c01-a5fb-6b17af27e57d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.229882 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-scripts" (OuterVolumeSpecName: "scripts") pod "395c0fb7-8e73-4c01-a5fb-6b17af27e57d" (UID: "395c0fb7-8e73-4c01-a5fb-6b17af27e57d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.234643 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-kube-api-access-v5l42" (OuterVolumeSpecName: "kube-api-access-v5l42") pod "395c0fb7-8e73-4c01-a5fb-6b17af27e57d" (UID: "395c0fb7-8e73-4c01-a5fb-6b17af27e57d"). InnerVolumeSpecName "kube-api-access-v5l42". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.254338 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "395c0fb7-8e73-4c01-a5fb-6b17af27e57d" (UID: "395c0fb7-8e73-4c01-a5fb-6b17af27e57d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.282738 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "395c0fb7-8e73-4c01-a5fb-6b17af27e57d" (UID: "395c0fb7-8e73-4c01-a5fb-6b17af27e57d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.315152 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.315177 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.315188 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5l42\" (UniqueName: \"kubernetes.io/projected/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-kube-api-access-v5l42\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.315205 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.315217 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.315227 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.332709 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-config-data" (OuterVolumeSpecName: "config-data") pod "395c0fb7-8e73-4c01-a5fb-6b17af27e57d" (UID: "395c0fb7-8e73-4c01-a5fb-6b17af27e57d"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.417740 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395c0fb7-8e73-4c01-a5fb-6b17af27e57d-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.421288 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1962316d-f4d0-407d-8070-1b208432c8fa" path="/var/lib/kubelet/pods/1962316d-f4d0-407d-8070-1b208432c8fa/volumes" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.692307 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.692368 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"395c0fb7-8e73-4c01-a5fb-6b17af27e57d","Type":"ContainerDied","Data":"4c448645f752cf853a5c3cbc16b34368d5d6d69fbdac9ac1928e989811d7408a"} Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.692407 4796 scope.go:117] "RemoveContainer" containerID="0a26dbb305d039aea443bc352fa86884b78256fbef6d3799b6b0e1f19c2dfba7" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.751838 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.765917 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.777135 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:46:10 crc kubenswrapper[4796]: E1125 14:46:10.777625 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="395c0fb7-8e73-4c01-a5fb-6b17af27e57d" containerName="ceilometer-notification-agent" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.777643 4796 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="395c0fb7-8e73-4c01-a5fb-6b17af27e57d" containerName="ceilometer-notification-agent" Nov 25 14:46:10 crc kubenswrapper[4796]: E1125 14:46:10.777670 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="395c0fb7-8e73-4c01-a5fb-6b17af27e57d" containerName="proxy-httpd" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.777679 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="395c0fb7-8e73-4c01-a5fb-6b17af27e57d" containerName="proxy-httpd" Nov 25 14:46:10 crc kubenswrapper[4796]: E1125 14:46:10.777694 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="395c0fb7-8e73-4c01-a5fb-6b17af27e57d" containerName="sg-core" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.777702 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="395c0fb7-8e73-4c01-a5fb-6b17af27e57d" containerName="sg-core" Nov 25 14:46:10 crc kubenswrapper[4796]: E1125 14:46:10.777919 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1962316d-f4d0-407d-8070-1b208432c8fa" containerName="init" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.777931 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1962316d-f4d0-407d-8070-1b208432c8fa" containerName="init" Nov 25 14:46:10 crc kubenswrapper[4796]: E1125 14:46:10.778288 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="395c0fb7-8e73-4c01-a5fb-6b17af27e57d" containerName="ceilometer-central-agent" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.778302 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="395c0fb7-8e73-4c01-a5fb-6b17af27e57d" containerName="ceilometer-central-agent" Nov 25 14:46:10 crc kubenswrapper[4796]: E1125 14:46:10.778316 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1962316d-f4d0-407d-8070-1b208432c8fa" containerName="dnsmasq-dns" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.778325 4796 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1962316d-f4d0-407d-8070-1b208432c8fa" containerName="dnsmasq-dns" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.778719 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1962316d-f4d0-407d-8070-1b208432c8fa" containerName="dnsmasq-dns" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.778740 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="395c0fb7-8e73-4c01-a5fb-6b17af27e57d" containerName="sg-core" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.778754 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="395c0fb7-8e73-4c01-a5fb-6b17af27e57d" containerName="proxy-httpd" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.778763 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="395c0fb7-8e73-4c01-a5fb-6b17af27e57d" containerName="ceilometer-notification-agent" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.778796 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="395c0fb7-8e73-4c01-a5fb-6b17af27e57d" containerName="ceilometer-central-agent" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.782676 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.785553 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.785825 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.789728 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.926623 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c801ed15-ca44-4901-a402-93224e1e73b6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " pod="openstack/ceilometer-0" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.927022 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c801ed15-ca44-4901-a402-93224e1e73b6-run-httpd\") pod \"ceilometer-0\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " pod="openstack/ceilometer-0" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.927289 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c801ed15-ca44-4901-a402-93224e1e73b6-config-data\") pod \"ceilometer-0\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " pod="openstack/ceilometer-0" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.927513 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c801ed15-ca44-4901-a402-93224e1e73b6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " 
pod="openstack/ceilometer-0" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.927736 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cppk\" (UniqueName: \"kubernetes.io/projected/c801ed15-ca44-4901-a402-93224e1e73b6-kube-api-access-8cppk\") pod \"ceilometer-0\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " pod="openstack/ceilometer-0" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.927928 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c801ed15-ca44-4901-a402-93224e1e73b6-scripts\") pod \"ceilometer-0\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " pod="openstack/ceilometer-0" Nov 25 14:46:10 crc kubenswrapper[4796]: I1125 14:46:10.928084 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c801ed15-ca44-4901-a402-93224e1e73b6-log-httpd\") pod \"ceilometer-0\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " pod="openstack/ceilometer-0" Nov 25 14:46:11 crc kubenswrapper[4796]: I1125 14:46:11.030362 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c801ed15-ca44-4901-a402-93224e1e73b6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " pod="openstack/ceilometer-0" Nov 25 14:46:11 crc kubenswrapper[4796]: I1125 14:46:11.031021 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cppk\" (UniqueName: \"kubernetes.io/projected/c801ed15-ca44-4901-a402-93224e1e73b6-kube-api-access-8cppk\") pod \"ceilometer-0\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " pod="openstack/ceilometer-0" Nov 25 14:46:11 crc kubenswrapper[4796]: I1125 14:46:11.031221 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c801ed15-ca44-4901-a402-93224e1e73b6-scripts\") pod \"ceilometer-0\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " pod="openstack/ceilometer-0" Nov 25 14:46:11 crc kubenswrapper[4796]: I1125 14:46:11.031399 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c801ed15-ca44-4901-a402-93224e1e73b6-log-httpd\") pod \"ceilometer-0\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " pod="openstack/ceilometer-0" Nov 25 14:46:11 crc kubenswrapper[4796]: I1125 14:46:11.031614 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c801ed15-ca44-4901-a402-93224e1e73b6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " pod="openstack/ceilometer-0" Nov 25 14:46:11 crc kubenswrapper[4796]: I1125 14:46:11.031769 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c801ed15-ca44-4901-a402-93224e1e73b6-run-httpd\") pod \"ceilometer-0\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " pod="openstack/ceilometer-0" Nov 25 14:46:11 crc kubenswrapper[4796]: I1125 14:46:11.031983 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c801ed15-ca44-4901-a402-93224e1e73b6-config-data\") pod \"ceilometer-0\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " pod="openstack/ceilometer-0" Nov 25 14:46:11 crc kubenswrapper[4796]: I1125 14:46:11.032254 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c801ed15-ca44-4901-a402-93224e1e73b6-log-httpd\") pod \"ceilometer-0\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " pod="openstack/ceilometer-0" Nov 25 
14:46:11 crc kubenswrapper[4796]: I1125 14:46:11.032494 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c801ed15-ca44-4901-a402-93224e1e73b6-run-httpd\") pod \"ceilometer-0\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " pod="openstack/ceilometer-0" Nov 25 14:46:11 crc kubenswrapper[4796]: I1125 14:46:11.036560 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c801ed15-ca44-4901-a402-93224e1e73b6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " pod="openstack/ceilometer-0" Nov 25 14:46:11 crc kubenswrapper[4796]: I1125 14:46:11.037033 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c801ed15-ca44-4901-a402-93224e1e73b6-config-data\") pod \"ceilometer-0\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " pod="openstack/ceilometer-0" Nov 25 14:46:11 crc kubenswrapper[4796]: I1125 14:46:11.037151 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c801ed15-ca44-4901-a402-93224e1e73b6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " pod="openstack/ceilometer-0" Nov 25 14:46:11 crc kubenswrapper[4796]: I1125 14:46:11.039008 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c801ed15-ca44-4901-a402-93224e1e73b6-scripts\") pod \"ceilometer-0\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " pod="openstack/ceilometer-0" Nov 25 14:46:11 crc kubenswrapper[4796]: I1125 14:46:11.052984 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cppk\" (UniqueName: \"kubernetes.io/projected/c801ed15-ca44-4901-a402-93224e1e73b6-kube-api-access-8cppk\") pod \"ceilometer-0\" (UID: 
\"c801ed15-ca44-4901-a402-93224e1e73b6\") " pod="openstack/ceilometer-0" Nov 25 14:46:11 crc kubenswrapper[4796]: I1125 14:46:11.106391 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:46:11 crc kubenswrapper[4796]: I1125 14:46:11.622484 4796 scope.go:117] "RemoveContainer" containerID="deb3816973ceddbd56c1daae3aa482fb1b26e94d10c408e57e2148d0dbdcbccd" Nov 25 14:46:11 crc kubenswrapper[4796]: I1125 14:46:11.663413 4796 scope.go:117] "RemoveContainer" containerID="51e66da980cc6d21137e7e19efabd2613836b2e47f8a568239f4ef10a6c89ff7" Nov 25 14:46:11 crc kubenswrapper[4796]: I1125 14:46:11.840933 4796 scope.go:117] "RemoveContainer" containerID="7c729dcdefc39f9939dd17140f8926e4086c5e4db268a5be72f078d7030e32a6" Nov 25 14:46:12 crc kubenswrapper[4796]: I1125 14:46:12.158143 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:46:12 crc kubenswrapper[4796]: W1125 14:46:12.162898 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc801ed15_ca44_4901_a402_93224e1e73b6.slice/crio-ae7e7720d5b5c010171f4df9e4465d725dd0afbb2bd35159242ae16e79391f6e WatchSource:0}: Error finding container ae7e7720d5b5c010171f4df9e4465d725dd0afbb2bd35159242ae16e79391f6e: Status 404 returned error can't find the container with id ae7e7720d5b5c010171f4df9e4465d725dd0afbb2bd35159242ae16e79391f6e Nov 25 14:46:12 crc kubenswrapper[4796]: I1125 14:46:12.429470 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="395c0fb7-8e73-4c01-a5fb-6b17af27e57d" path="/var/lib/kubelet/pods/395c0fb7-8e73-4c01-a5fb-6b17af27e57d/volumes" Nov 25 14:46:12 crc kubenswrapper[4796]: I1125 14:46:12.719753 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-847768d9dc-hdkcj" 
event={"ID":"c2ea5acd-889d-439f-9295-39424d08c923","Type":"ContainerStarted","Data":"6847b047328a8afd19ceded1c90679ecc904180bf28dc9407240ce8c852a4238"} Nov 25 14:46:12 crc kubenswrapper[4796]: I1125 14:46:12.720145 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-847768d9dc-hdkcj" event={"ID":"c2ea5acd-889d-439f-9295-39424d08c923","Type":"ContainerStarted","Data":"420f1151b8d3f73e443941011f11b8820e9f0ef474d2a2f64ceabb5e862084dc"} Nov 25 14:46:12 crc kubenswrapper[4796]: I1125 14:46:12.725689 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-696c6c8f78-kwfxh" event={"ID":"71e86788-aa18-413b-aaa7-f216ef8d4f2b","Type":"ContainerStarted","Data":"21e1981dd4e5799a5ec57c27799ea745c7942d93bcb27da17d18f9555753417e"} Nov 25 14:46:12 crc kubenswrapper[4796]: I1125 14:46:12.725734 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-696c6c8f78-kwfxh" event={"ID":"71e86788-aa18-413b-aaa7-f216ef8d4f2b","Type":"ContainerStarted","Data":"0a0e09a7335c1d8d2505fca7cd48d38c53e354474d5e9e268ce2e495ab4951e4"} Nov 25 14:46:12 crc kubenswrapper[4796]: I1125 14:46:12.728827 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c801ed15-ca44-4901-a402-93224e1e73b6","Type":"ContainerStarted","Data":"ae7e7720d5b5c010171f4df9e4465d725dd0afbb2bd35159242ae16e79391f6e"} Nov 25 14:46:12 crc kubenswrapper[4796]: I1125 14:46:12.731247 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" event={"ID":"7399716c-47ba-4a11-81b0-f206c95855df","Type":"ContainerStarted","Data":"af003b11ed836695a7ec33f6dd33bdce86d25aac9629ff36547fbf1378a9ca96"} Nov 25 14:46:12 crc kubenswrapper[4796]: I1125 14:46:12.731298 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" Nov 25 14:46:12 crc kubenswrapper[4796]: I1125 14:46:12.743086 4796 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-847768d9dc-hdkcj" podStartSLOduration=8.586890815 podStartE2EDuration="11.743063479s" podCreationTimestamp="2025-11-25 14:46:01 +0000 UTC" firstStartedPulling="2025-11-25 14:46:08.516668161 +0000 UTC m=+1296.859777585" lastFinishedPulling="2025-11-25 14:46:11.672840825 +0000 UTC m=+1300.015950249" observedRunningTime="2025-11-25 14:46:12.738338702 +0000 UTC m=+1301.081448146" watchObservedRunningTime="2025-11-25 14:46:12.743063479 +0000 UTC m=+1301.086172903" Nov 25 14:46:12 crc kubenswrapper[4796]: I1125 14:46:12.763339 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" podStartSLOduration=11.763320791 podStartE2EDuration="11.763320791s" podCreationTimestamp="2025-11-25 14:46:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:46:12.760939717 +0000 UTC m=+1301.104049141" watchObservedRunningTime="2025-11-25 14:46:12.763320791 +0000 UTC m=+1301.106430215" Nov 25 14:46:12 crc kubenswrapper[4796]: I1125 14:46:12.798438 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-696c6c8f78-kwfxh" podStartSLOduration=8.640135696 podStartE2EDuration="11.798417225s" podCreationTimestamp="2025-11-25 14:46:01 +0000 UTC" firstStartedPulling="2025-11-25 14:46:08.515733982 +0000 UTC m=+1296.858843396" lastFinishedPulling="2025-11-25 14:46:11.674015501 +0000 UTC m=+1300.017124925" observedRunningTime="2025-11-25 14:46:12.789943051 +0000 UTC m=+1301.133052475" watchObservedRunningTime="2025-11-25 14:46:12.798417225 +0000 UTC m=+1301.141526649" Nov 25 14:46:13 crc kubenswrapper[4796]: I1125 14:46:13.750847 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c801ed15-ca44-4901-a402-93224e1e73b6","Type":"ContainerStarted","Data":"4074c27cdd1e17737dce570bb0eed54f64058797bde6664d4a3f706ebcff26bf"} Nov 25 14:46:13 crc kubenswrapper[4796]: I1125 14:46:13.751134 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c801ed15-ca44-4901-a402-93224e1e73b6","Type":"ContainerStarted","Data":"168f4059f8dd3694c25f3ec08f457f26483bf1259f2a594f4e6582d3d8485740"} Nov 25 14:46:14 crc kubenswrapper[4796]: I1125 14:46:14.261864 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:46:14 crc kubenswrapper[4796]: I1125 14:46:14.366617 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:46:14 crc kubenswrapper[4796]: I1125 14:46:14.763765 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c801ed15-ca44-4901-a402-93224e1e73b6","Type":"ContainerStarted","Data":"59656fbccd507e6664b6d6f687610fd80a9b1ea776dbb959a36816230209d07f"} Nov 25 14:46:15 crc kubenswrapper[4796]: I1125 14:46:15.773343 4796 generic.go:334] "Generic (PLEG): container finished" podID="b0493d28-3276-4a85-a800-4d0b1576c407" containerID="463903737e97d6d64a9615a13c2ff45e3d614f97444e82a5efc8c97e7c7b0161" exitCode=0 Nov 25 14:46:15 crc kubenswrapper[4796]: I1125 14:46:15.773419 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tt8qv" event={"ID":"b0493d28-3276-4a85-a800-4d0b1576c407","Type":"ContainerDied","Data":"463903737e97d6d64a9615a13c2ff45e3d614f97444e82a5efc8c97e7c7b0161"} Nov 25 14:46:15 crc kubenswrapper[4796]: I1125 14:46:15.883219 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:46:15 crc kubenswrapper[4796]: I1125 14:46:15.963277 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/horizon-674489f5b-nnl97" Nov 25 14:46:16 crc kubenswrapper[4796]: I1125 14:46:16.021661 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7cd9956864-5xkx5"] Nov 25 14:46:16 crc kubenswrapper[4796]: I1125 14:46:16.361994 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:16 crc kubenswrapper[4796]: I1125 14:46:16.366250 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-648cbfbf74-5bhgn" Nov 25 14:46:16 crc kubenswrapper[4796]: I1125 14:46:16.484498 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6b8bdcff86-mhf8m"] Nov 25 14:46:16 crc kubenswrapper[4796]: I1125 14:46:16.484791 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6b8bdcff86-mhf8m" podUID="bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b" containerName="barbican-api-log" containerID="cri-o://3e3e2043a5944e072c9b65119ebacac447343c43b93c2d4f121cec4465088275" gracePeriod=30 Nov 25 14:46:16 crc kubenswrapper[4796]: I1125 14:46:16.486224 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6b8bdcff86-mhf8m" podUID="bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b" containerName="barbican-api" containerID="cri-o://831512dbcd0b153ba9758ce12dff538a6b8ad577f50bd470280149b4e22be0e9" gracePeriod=30 Nov 25 14:46:16 crc kubenswrapper[4796]: I1125 14:46:16.493040 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b8bdcff86-mhf8m" podUID="bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": EOF" Nov 25 14:46:16 crc kubenswrapper[4796]: I1125 14:46:16.493214 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b8bdcff86-mhf8m" podUID="bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b" 
containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": EOF" Nov 25 14:46:16 crc kubenswrapper[4796]: I1125 14:46:16.784411 4796 generic.go:334] "Generic (PLEG): container finished" podID="bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b" containerID="3e3e2043a5944e072c9b65119ebacac447343c43b93c2d4f121cec4465088275" exitCode=143 Nov 25 14:46:16 crc kubenswrapper[4796]: I1125 14:46:16.784491 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b8bdcff86-mhf8m" event={"ID":"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b","Type":"ContainerDied","Data":"3e3e2043a5944e072c9b65119ebacac447343c43b93c2d4f121cec4465088275"} Nov 25 14:46:16 crc kubenswrapper[4796]: I1125 14:46:16.787981 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7cd9956864-5xkx5" podUID="23942f6c-a777-4b11-a51d-ccaee1fff6e7" containerName="horizon-log" containerID="cri-o://ae453e3aaa7cbba30fd5bc3de23897fd5dc332bf3b291917085c8ce4126081c4" gracePeriod=30 Nov 25 14:46:16 crc kubenswrapper[4796]: I1125 14:46:16.788426 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7cd9956864-5xkx5" podUID="23942f6c-a777-4b11-a51d-ccaee1fff6e7" containerName="horizon" containerID="cri-o://11b12a44fb12af68b288b537b801c1396328386b617bea8abdea96586dfce0b0" gracePeriod=30 Nov 25 14:46:16 crc kubenswrapper[4796]: I1125 14:46:16.788445 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c801ed15-ca44-4901-a402-93224e1e73b6","Type":"ContainerStarted","Data":"0274c4e9a039f9270587c10ec610672b81396bd264e071d47e27a007e7461028"} Nov 25 14:46:16 crc kubenswrapper[4796]: I1125 14:46:16.825631 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.4505236950000002 podStartE2EDuration="6.825611443s" podCreationTimestamp="2025-11-25 14:46:10 +0000 UTC" 
firstStartedPulling="2025-11-25 14:46:12.165024309 +0000 UTC m=+1300.508133733" lastFinishedPulling="2025-11-25 14:46:15.540112057 +0000 UTC m=+1303.883221481" observedRunningTime="2025-11-25 14:46:16.820293977 +0000 UTC m=+1305.163403401" watchObservedRunningTime="2025-11-25 14:46:16.825611443 +0000 UTC m=+1305.168720867" Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.179821 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-tt8qv" Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.286627 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b0493d28-3276-4a85-a800-4d0b1576c407-db-sync-config-data\") pod \"b0493d28-3276-4a85-a800-4d0b1576c407\" (UID: \"b0493d28-3276-4a85-a800-4d0b1576c407\") " Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.286754 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0493d28-3276-4a85-a800-4d0b1576c407-config-data\") pod \"b0493d28-3276-4a85-a800-4d0b1576c407\" (UID: \"b0493d28-3276-4a85-a800-4d0b1576c407\") " Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.286862 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0493d28-3276-4a85-a800-4d0b1576c407-combined-ca-bundle\") pod \"b0493d28-3276-4a85-a800-4d0b1576c407\" (UID: \"b0493d28-3276-4a85-a800-4d0b1576c407\") " Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.286886 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0493d28-3276-4a85-a800-4d0b1576c407-scripts\") pod \"b0493d28-3276-4a85-a800-4d0b1576c407\" (UID: \"b0493d28-3276-4a85-a800-4d0b1576c407\") " Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.286917 4796 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b0493d28-3276-4a85-a800-4d0b1576c407-etc-machine-id\") pod \"b0493d28-3276-4a85-a800-4d0b1576c407\" (UID: \"b0493d28-3276-4a85-a800-4d0b1576c407\") " Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.286955 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6fmx\" (UniqueName: \"kubernetes.io/projected/b0493d28-3276-4a85-a800-4d0b1576c407-kube-api-access-c6fmx\") pod \"b0493d28-3276-4a85-a800-4d0b1576c407\" (UID: \"b0493d28-3276-4a85-a800-4d0b1576c407\") " Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.287428 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0493d28-3276-4a85-a800-4d0b1576c407-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b0493d28-3276-4a85-a800-4d0b1576c407" (UID: "b0493d28-3276-4a85-a800-4d0b1576c407"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.292889 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0493d28-3276-4a85-a800-4d0b1576c407-kube-api-access-c6fmx" (OuterVolumeSpecName: "kube-api-access-c6fmx") pod "b0493d28-3276-4a85-a800-4d0b1576c407" (UID: "b0493d28-3276-4a85-a800-4d0b1576c407"). InnerVolumeSpecName "kube-api-access-c6fmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.293722 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0493d28-3276-4a85-a800-4d0b1576c407-scripts" (OuterVolumeSpecName: "scripts") pod "b0493d28-3276-4a85-a800-4d0b1576c407" (UID: "b0493d28-3276-4a85-a800-4d0b1576c407"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.304655 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0493d28-3276-4a85-a800-4d0b1576c407-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b0493d28-3276-4a85-a800-4d0b1576c407" (UID: "b0493d28-3276-4a85-a800-4d0b1576c407"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.337618 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0493d28-3276-4a85-a800-4d0b1576c407-config-data" (OuterVolumeSpecName: "config-data") pod "b0493d28-3276-4a85-a800-4d0b1576c407" (UID: "b0493d28-3276-4a85-a800-4d0b1576c407"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.341757 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0493d28-3276-4a85-a800-4d0b1576c407-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0493d28-3276-4a85-a800-4d0b1576c407" (UID: "b0493d28-3276-4a85-a800-4d0b1576c407"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.350749 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.389052 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0493d28-3276-4a85-a800-4d0b1576c407-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.389088 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0493d28-3276-4a85-a800-4d0b1576c407-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.389100 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0493d28-3276-4a85-a800-4d0b1576c407-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.389108 4796 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b0493d28-3276-4a85-a800-4d0b1576c407-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.389118 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6fmx\" (UniqueName: \"kubernetes.io/projected/b0493d28-3276-4a85-a800-4d0b1576c407-kube-api-access-c6fmx\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.389128 4796 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b0493d28-3276-4a85-a800-4d0b1576c407-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.430934 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-785d8bcb8c-d58n6"] Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.431204 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" podUID="64d653a7-a2c8-439a-9b7c-733682c79eeb" containerName="dnsmasq-dns" containerID="cri-o://9ca91f1d43260b84d191e65ad3667fbaa905792f9e3e5333af6da6674259b85f" gracePeriod=10 Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.817873 4796 generic.go:334] "Generic (PLEG): container finished" podID="64d653a7-a2c8-439a-9b7c-733682c79eeb" containerID="9ca91f1d43260b84d191e65ad3667fbaa905792f9e3e5333af6da6674259b85f" exitCode=0 Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.817966 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" event={"ID":"64d653a7-a2c8-439a-9b7c-733682c79eeb","Type":"ContainerDied","Data":"9ca91f1d43260b84d191e65ad3667fbaa905792f9e3e5333af6da6674259b85f"} Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.822175 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-tt8qv" Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.822708 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tt8qv" event={"ID":"b0493d28-3276-4a85-a800-4d0b1576c407","Type":"ContainerDied","Data":"a55c80ed56705fa77b41622708791b00e8577d16f50aed9a671e062dab269c28"} Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.822734 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a55c80ed56705fa77b41622708791b00e8577d16f50aed9a671e062dab269c28" Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.822749 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 14:46:17 crc kubenswrapper[4796]: I1125 14:46:17.860791 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" podUID="64d653a7-a2c8-439a-9b7c-733682c79eeb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: connect: connection refused" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.086729 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 14:46:18 crc kubenswrapper[4796]: E1125 14:46:18.087323 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0493d28-3276-4a85-a800-4d0b1576c407" containerName="cinder-db-sync" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.087392 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0493d28-3276-4a85-a800-4d0b1576c407" containerName="cinder-db-sync" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.087689 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0493d28-3276-4a85-a800-4d0b1576c407" containerName="cinder-db-sync" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.088743 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.103340 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.106966 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-jzp4t" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.107190 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.107281 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.133388 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.149563 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-96sth"] Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.151131 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.173640 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-96sth"] Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.203634 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef983792-84b3-4cd2-86be-253f5619b093-config-data\") pod \"cinder-scheduler-0\" (UID: \"ef983792-84b3-4cd2-86be-253f5619b093\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.203664 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef983792-84b3-4cd2-86be-253f5619b093-scripts\") pod \"cinder-scheduler-0\" (UID: \"ef983792-84b3-4cd2-86be-253f5619b093\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.203742 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpmpj\" (UniqueName: \"kubernetes.io/projected/ef983792-84b3-4cd2-86be-253f5619b093-kube-api-access-hpmpj\") pod \"cinder-scheduler-0\" (UID: \"ef983792-84b3-4cd2-86be-253f5619b093\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.203770 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef983792-84b3-4cd2-86be-253f5619b093-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ef983792-84b3-4cd2-86be-253f5619b093\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.203788 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/ef983792-84b3-4cd2-86be-253f5619b093-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ef983792-84b3-4cd2-86be-253f5619b093\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.203838 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef983792-84b3-4cd2-86be-253f5619b093-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ef983792-84b3-4cd2-86be-253f5619b093\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.310478 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef983792-84b3-4cd2-86be-253f5619b093-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ef983792-84b3-4cd2-86be-253f5619b093\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.310534 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef983792-84b3-4cd2-86be-253f5619b093-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ef983792-84b3-4cd2-86be-253f5619b093\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.310621 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-96sth\" (UID: \"735e664e-6d33-446e-96dd-bd86dbe45ec3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.310726 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-config\") 
pod \"dnsmasq-dns-5c9776ccc5-96sth\" (UID: \"735e664e-6d33-446e-96dd-bd86dbe45ec3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.310779 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef983792-84b3-4cd2-86be-253f5619b093-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ef983792-84b3-4cd2-86be-253f5619b093\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.310875 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef983792-84b3-4cd2-86be-253f5619b093-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ef983792-84b3-4cd2-86be-253f5619b093\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.311075 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-96sth\" (UID: \"735e664e-6d33-446e-96dd-bd86dbe45ec3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.311121 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef983792-84b3-4cd2-86be-253f5619b093-config-data\") pod \"cinder-scheduler-0\" (UID: \"ef983792-84b3-4cd2-86be-253f5619b093\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.311153 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef983792-84b3-4cd2-86be-253f5619b093-scripts\") pod \"cinder-scheduler-0\" (UID: \"ef983792-84b3-4cd2-86be-253f5619b093\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:18 crc 
kubenswrapper[4796]: I1125 14:46:18.311187 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvwb4\" (UniqueName: \"kubernetes.io/projected/735e664e-6d33-446e-96dd-bd86dbe45ec3-kube-api-access-fvwb4\") pod \"dnsmasq-dns-5c9776ccc5-96sth\" (UID: \"735e664e-6d33-446e-96dd-bd86dbe45ec3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.311227 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-96sth\" (UID: \"735e664e-6d33-446e-96dd-bd86dbe45ec3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.311309 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-96sth\" (UID: \"735e664e-6d33-446e-96dd-bd86dbe45ec3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.311364 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpmpj\" (UniqueName: \"kubernetes.io/projected/ef983792-84b3-4cd2-86be-253f5619b093-kube-api-access-hpmpj\") pod \"cinder-scheduler-0\" (UID: \"ef983792-84b3-4cd2-86be-253f5619b093\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.330931 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef983792-84b3-4cd2-86be-253f5619b093-config-data\") pod \"cinder-scheduler-0\" (UID: \"ef983792-84b3-4cd2-86be-253f5619b093\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:18 crc 
kubenswrapper[4796]: I1125 14:46:18.333803 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef983792-84b3-4cd2-86be-253f5619b093-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ef983792-84b3-4cd2-86be-253f5619b093\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.334138 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef983792-84b3-4cd2-86be-253f5619b093-scripts\") pod \"cinder-scheduler-0\" (UID: \"ef983792-84b3-4cd2-86be-253f5619b093\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.335201 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef983792-84b3-4cd2-86be-253f5619b093-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ef983792-84b3-4cd2-86be-253f5619b093\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.346392 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpmpj\" (UniqueName: \"kubernetes.io/projected/ef983792-84b3-4cd2-86be-253f5619b093-kube-api-access-hpmpj\") pod \"cinder-scheduler-0\" (UID: \"ef983792-84b3-4cd2-86be-253f5619b093\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.360174 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.362066 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.363457 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.369523 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.413050 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-96sth\" (UID: \"735e664e-6d33-446e-96dd-bd86dbe45ec3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.413115 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-config\") pod \"dnsmasq-dns-5c9776ccc5-96sth\" (UID: \"735e664e-6d33-446e-96dd-bd86dbe45ec3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.413181 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-96sth\" (UID: \"735e664e-6d33-446e-96dd-bd86dbe45ec3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.413215 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvwb4\" (UniqueName: \"kubernetes.io/projected/735e664e-6d33-446e-96dd-bd86dbe45ec3-kube-api-access-fvwb4\") pod \"dnsmasq-dns-5c9776ccc5-96sth\" (UID: \"735e664e-6d33-446e-96dd-bd86dbe45ec3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.413236 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-96sth\" (UID: \"735e664e-6d33-446e-96dd-bd86dbe45ec3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.413692 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-96sth\" (UID: \"735e664e-6d33-446e-96dd-bd86dbe45ec3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.414195 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-96sth\" (UID: \"735e664e-6d33-446e-96dd-bd86dbe45ec3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.414242 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-96sth\" (UID: \"735e664e-6d33-446e-96dd-bd86dbe45ec3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.414305 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-config\") pod \"dnsmasq-dns-5c9776ccc5-96sth\" (UID: \"735e664e-6d33-446e-96dd-bd86dbe45ec3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.415433 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-96sth\" (UID: \"735e664e-6d33-446e-96dd-bd86dbe45ec3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.415896 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-96sth\" (UID: \"735e664e-6d33-446e-96dd-bd86dbe45ec3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.430026 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.444315 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvwb4\" (UniqueName: \"kubernetes.io/projected/735e664e-6d33-446e-96dd-bd86dbe45ec3-kube-api-access-fvwb4\") pod \"dnsmasq-dns-5c9776ccc5-96sth\" (UID: \"735e664e-6d33-446e-96dd-bd86dbe45ec3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.507551 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.516031 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22892637-d36e-4ef8-b0cb-482c0e5012cf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " pod="openstack/cinder-api-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.516417 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22892637-d36e-4ef8-b0cb-482c0e5012cf-config-data\") pod \"cinder-api-0\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " pod="openstack/cinder-api-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.516491 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22892637-d36e-4ef8-b0cb-482c0e5012cf-logs\") pod \"cinder-api-0\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " pod="openstack/cinder-api-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.516514 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22892637-d36e-4ef8-b0cb-482c0e5012cf-scripts\") pod \"cinder-api-0\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " pod="openstack/cinder-api-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.516557 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22892637-d36e-4ef8-b0cb-482c0e5012cf-config-data-custom\") pod \"cinder-api-0\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " pod="openstack/cinder-api-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.516638 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vd6w\" (UniqueName: \"kubernetes.io/projected/22892637-d36e-4ef8-b0cb-482c0e5012cf-kube-api-access-2vd6w\") pod \"cinder-api-0\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " pod="openstack/cinder-api-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.516659 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/22892637-d36e-4ef8-b0cb-482c0e5012cf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " pod="openstack/cinder-api-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.600251 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.618430 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vd6w\" (UniqueName: \"kubernetes.io/projected/22892637-d36e-4ef8-b0cb-482c0e5012cf-kube-api-access-2vd6w\") pod \"cinder-api-0\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " pod="openstack/cinder-api-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.618475 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/22892637-d36e-4ef8-b0cb-482c0e5012cf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " pod="openstack/cinder-api-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.618537 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22892637-d36e-4ef8-b0cb-482c0e5012cf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " pod="openstack/cinder-api-0" Nov 25 14:46:18 crc 
kubenswrapper[4796]: I1125 14:46:18.618586 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22892637-d36e-4ef8-b0cb-482c0e5012cf-config-data\") pod \"cinder-api-0\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " pod="openstack/cinder-api-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.618632 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22892637-d36e-4ef8-b0cb-482c0e5012cf-logs\") pod \"cinder-api-0\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " pod="openstack/cinder-api-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.618653 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22892637-d36e-4ef8-b0cb-482c0e5012cf-scripts\") pod \"cinder-api-0\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " pod="openstack/cinder-api-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.618686 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22892637-d36e-4ef8-b0cb-482c0e5012cf-config-data-custom\") pod \"cinder-api-0\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " pod="openstack/cinder-api-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.620042 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/22892637-d36e-4ef8-b0cb-482c0e5012cf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " pod="openstack/cinder-api-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.620547 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22892637-d36e-4ef8-b0cb-482c0e5012cf-logs\") pod \"cinder-api-0\" (UID: 
\"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " pod="openstack/cinder-api-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.622726 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22892637-d36e-4ef8-b0cb-482c0e5012cf-config-data-custom\") pod \"cinder-api-0\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " pod="openstack/cinder-api-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.623972 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22892637-d36e-4ef8-b0cb-482c0e5012cf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " pod="openstack/cinder-api-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.627131 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22892637-d36e-4ef8-b0cb-482c0e5012cf-scripts\") pod \"cinder-api-0\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " pod="openstack/cinder-api-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.645172 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vd6w\" (UniqueName: \"kubernetes.io/projected/22892637-d36e-4ef8-b0cb-482c0e5012cf-kube-api-access-2vd6w\") pod \"cinder-api-0\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " pod="openstack/cinder-api-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.645435 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22892637-d36e-4ef8-b0cb-482c0e5012cf-config-data\") pod \"cinder-api-0\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " pod="openstack/cinder-api-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.719329 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-dns-svc\") pod \"64d653a7-a2c8-439a-9b7c-733682c79eeb\" (UID: \"64d653a7-a2c8-439a-9b7c-733682c79eeb\") " Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.719401 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ppq4\" (UniqueName: \"kubernetes.io/projected/64d653a7-a2c8-439a-9b7c-733682c79eeb-kube-api-access-2ppq4\") pod \"64d653a7-a2c8-439a-9b7c-733682c79eeb\" (UID: \"64d653a7-a2c8-439a-9b7c-733682c79eeb\") " Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.719453 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-ovsdbserver-nb\") pod \"64d653a7-a2c8-439a-9b7c-733682c79eeb\" (UID: \"64d653a7-a2c8-439a-9b7c-733682c79eeb\") " Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.719561 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-ovsdbserver-sb\") pod \"64d653a7-a2c8-439a-9b7c-733682c79eeb\" (UID: \"64d653a7-a2c8-439a-9b7c-733682c79eeb\") " Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.719713 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-config\") pod \"64d653a7-a2c8-439a-9b7c-733682c79eeb\" (UID: \"64d653a7-a2c8-439a-9b7c-733682c79eeb\") " Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.719753 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-dns-swift-storage-0\") pod \"64d653a7-a2c8-439a-9b7c-733682c79eeb\" (UID: \"64d653a7-a2c8-439a-9b7c-733682c79eeb\") " Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 
14:46:18.747723 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64d653a7-a2c8-439a-9b7c-733682c79eeb-kube-api-access-2ppq4" (OuterVolumeSpecName: "kube-api-access-2ppq4") pod "64d653a7-a2c8-439a-9b7c-733682c79eeb" (UID: "64d653a7-a2c8-439a-9b7c-733682c79eeb"). InnerVolumeSpecName "kube-api-access-2ppq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.823530 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ppq4\" (UniqueName: \"kubernetes.io/projected/64d653a7-a2c8-439a-9b7c-733682c79eeb-kube-api-access-2ppq4\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.825105 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "64d653a7-a2c8-439a-9b7c-733682c79eeb" (UID: "64d653a7-a2c8-439a-9b7c-733682c79eeb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.831229 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "64d653a7-a2c8-439a-9b7c-733682c79eeb" (UID: "64d653a7-a2c8-439a-9b7c-733682c79eeb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.834701 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.835675 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-d58n6" event={"ID":"64d653a7-a2c8-439a-9b7c-733682c79eeb","Type":"ContainerDied","Data":"76b47d57e3d4eecb2e98e6e506d618898a481cd8845ec3e3285b5828e46bc93e"} Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.835726 4796 scope.go:117] "RemoveContainer" containerID="9ca91f1d43260b84d191e65ad3667fbaa905792f9e3e5333af6da6674259b85f" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.850231 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.856644 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-config" (OuterVolumeSpecName: "config") pod "64d653a7-a2c8-439a-9b7c-733682c79eeb" (UID: "64d653a7-a2c8-439a-9b7c-733682c79eeb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.857763 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "64d653a7-a2c8-439a-9b7c-733682c79eeb" (UID: "64d653a7-a2c8-439a-9b7c-733682c79eeb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.921589 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "64d653a7-a2c8-439a-9b7c-733682c79eeb" (UID: "64d653a7-a2c8-439a-9b7c-733682c79eeb"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.950444 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.950483 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.950499 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.950530 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.950542 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64d653a7-a2c8-439a-9b7c-733682c79eeb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.973886 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 14:46:18 crc kubenswrapper[4796]: I1125 14:46:18.993632 4796 scope.go:117] "RemoveContainer" containerID="eb5462985df4501cd386f1d622c7d422cbb32538ceadcb98b67632635cf653e7" Nov 25 14:46:19 crc kubenswrapper[4796]: I1125 14:46:19.130165 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-96sth"] Nov 25 14:46:19 crc kubenswrapper[4796]: I1125 14:46:19.174123 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-785d8bcb8c-d58n6"] Nov 25 14:46:19 crc kubenswrapper[4796]: I1125 14:46:19.185696 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-d58n6"] Nov 25 14:46:19 crc kubenswrapper[4796]: I1125 14:46:19.437067 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 14:46:19 crc kubenswrapper[4796]: W1125 14:46:19.441135 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22892637_d36e_4ef8_b0cb_482c0e5012cf.slice/crio-85768aed86914c6687093fd2fa92300b5219f599ba64a6448c6a140662fdd636 WatchSource:0}: Error finding container 85768aed86914c6687093fd2fa92300b5219f599ba64a6448c6a140662fdd636: Status 404 returned error can't find the container with id 85768aed86914c6687093fd2fa92300b5219f599ba64a6448c6a140662fdd636 Nov 25 14:46:19 crc kubenswrapper[4796]: I1125 14:46:19.514467 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 14:46:19 crc kubenswrapper[4796]: I1125 14:46:19.514530 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 14:46:19 crc kubenswrapper[4796]: I1125 14:46:19.851423 4796 generic.go:334] "Generic (PLEG): container finished" podID="735e664e-6d33-446e-96dd-bd86dbe45ec3" containerID="02a03b6f6600327c3c7299cdd159c7738d649e4be9e59a51d99dd4d4863024c5" exitCode=0 Nov 25 14:46:19 crc kubenswrapper[4796]: I1125 14:46:19.851552 4796 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" event={"ID":"735e664e-6d33-446e-96dd-bd86dbe45ec3","Type":"ContainerDied","Data":"02a03b6f6600327c3c7299cdd159c7738d649e4be9e59a51d99dd4d4863024c5"} Nov 25 14:46:19 crc kubenswrapper[4796]: I1125 14:46:19.851653 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" event={"ID":"735e664e-6d33-446e-96dd-bd86dbe45ec3","Type":"ContainerStarted","Data":"adf6cceec43e837ba7724fd7422def760ed13c685ef7e96c7511cb0a13d06f45"} Nov 25 14:46:19 crc kubenswrapper[4796]: I1125 14:46:19.859145 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"22892637-d36e-4ef8-b0cb-482c0e5012cf","Type":"ContainerStarted","Data":"85768aed86914c6687093fd2fa92300b5219f599ba64a6448c6a140662fdd636"} Nov 25 14:46:19 crc kubenswrapper[4796]: I1125 14:46:19.860664 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ef983792-84b3-4cd2-86be-253f5619b093","Type":"ContainerStarted","Data":"bb49e3936e1bd49db2d4ae6e4f60297d7efe1abb916c7a14697474f70bf7a1a4"} Nov 25 14:46:20 crc kubenswrapper[4796]: I1125 14:46:20.339302 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 25 14:46:20 crc kubenswrapper[4796]: I1125 14:46:20.428121 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64d653a7-a2c8-439a-9b7c-733682c79eeb" path="/var/lib/kubelet/pods/64d653a7-a2c8-439a-9b7c-733682c79eeb/volumes" Nov 25 14:46:20 crc kubenswrapper[4796]: I1125 14:46:20.888548 4796 generic.go:334] "Generic (PLEG): container finished" podID="23942f6c-a777-4b11-a51d-ccaee1fff6e7" containerID="11b12a44fb12af68b288b537b801c1396328386b617bea8abdea96586dfce0b0" exitCode=0 Nov 25 14:46:20 crc kubenswrapper[4796]: I1125 14:46:20.888641 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cd9956864-5xkx5" 
event={"ID":"23942f6c-a777-4b11-a51d-ccaee1fff6e7","Type":"ContainerDied","Data":"11b12a44fb12af68b288b537b801c1396328386b617bea8abdea96586dfce0b0"} Nov 25 14:46:20 crc kubenswrapper[4796]: I1125 14:46:20.902294 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"22892637-d36e-4ef8-b0cb-482c0e5012cf","Type":"ContainerStarted","Data":"aac0f75944c4db76ada05cc25de9cd28e8bfbdef216c58d19103e97bc45c5c6f"} Nov 25 14:46:20 crc kubenswrapper[4796]: I1125 14:46:20.914919 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ef983792-84b3-4cd2-86be-253f5619b093","Type":"ContainerStarted","Data":"ffcf66d72f507eb2126b8be605617c121e6421f484b7a6753fb46edab71ced5b"} Nov 25 14:46:20 crc kubenswrapper[4796]: I1125 14:46:20.931715 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" event={"ID":"735e664e-6d33-446e-96dd-bd86dbe45ec3","Type":"ContainerStarted","Data":"b25bd8f142a5225e6ea6fc9720f89d5e05a4a43db5f5ac132bf232d20b31b182"} Nov 25 14:46:20 crc kubenswrapper[4796]: I1125 14:46:20.932753 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" Nov 25 14:46:21 crc kubenswrapper[4796]: I1125 14:46:21.941113 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"22892637-d36e-4ef8-b0cb-482c0e5012cf","Type":"ContainerStarted","Data":"5be2c7232030c68266920eecd3fef17ead91914c675ca9a0516c54866ccd6bf7"} Nov 25 14:46:21 crc kubenswrapper[4796]: I1125 14:46:21.941482 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 25 14:46:21 crc kubenswrapper[4796]: I1125 14:46:21.941312 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="22892637-d36e-4ef8-b0cb-482c0e5012cf" containerName="cinder-api" 
containerID="cri-o://5be2c7232030c68266920eecd3fef17ead91914c675ca9a0516c54866ccd6bf7" gracePeriod=30 Nov 25 14:46:21 crc kubenswrapper[4796]: I1125 14:46:21.941208 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="22892637-d36e-4ef8-b0cb-482c0e5012cf" containerName="cinder-api-log" containerID="cri-o://aac0f75944c4db76ada05cc25de9cd28e8bfbdef216c58d19103e97bc45c5c6f" gracePeriod=30 Nov 25 14:46:21 crc kubenswrapper[4796]: I1125 14:46:21.946206 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ef983792-84b3-4cd2-86be-253f5619b093","Type":"ContainerStarted","Data":"cfb87b0e7fd7ad2ac439c643785c7d7eb6689e5670b4bfc44f9e2c30de306418"} Nov 25 14:46:21 crc kubenswrapper[4796]: I1125 14:46:21.967834 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" podStartSLOduration=3.96781851 podStartE2EDuration="3.96781851s" podCreationTimestamp="2025-11-25 14:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:46:20.955918825 +0000 UTC m=+1309.299028249" watchObservedRunningTime="2025-11-25 14:46:21.96781851 +0000 UTC m=+1310.310927934" Nov 25 14:46:21 crc kubenswrapper[4796]: I1125 14:46:21.973159 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.973143067 podStartE2EDuration="3.973143067s" podCreationTimestamp="2025-11-25 14:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:46:21.964352603 +0000 UTC m=+1310.307462037" watchObservedRunningTime="2025-11-25 14:46:21.973143067 +0000 UTC m=+1310.316252491" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.042171 4796 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/barbican-api-6b8bdcff86-mhf8m" podUID="bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:32798->10.217.0.160:9311: read: connection reset by peer" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.042218 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b8bdcff86-mhf8m" podUID="bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:32812->10.217.0.160:9311: read: connection reset by peer" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.316698 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7cd9956864-5xkx5" podUID="23942f6c-a777-4b11-a51d-ccaee1fff6e7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.144:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.144:8443: connect: connection refused" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.658807 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b8bdcff86-mhf8m" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.665977 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.684768 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.681511114 podStartE2EDuration="4.684750831s" podCreationTimestamp="2025-11-25 14:46:18 +0000 UTC" firstStartedPulling="2025-11-25 14:46:19.036743714 +0000 UTC m=+1307.379853138" lastFinishedPulling="2025-11-25 14:46:20.039983431 +0000 UTC m=+1308.383092855" observedRunningTime="2025-11-25 14:46:21.98833407 +0000 UTC m=+1310.331443494" watchObservedRunningTime="2025-11-25 14:46:22.684750831 +0000 UTC m=+1311.027860255" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.730328 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-combined-ca-bundle\") pod \"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b\" (UID: \"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b\") " Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.730447 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22892637-d36e-4ef8-b0cb-482c0e5012cf-config-data\") pod \"22892637-d36e-4ef8-b0cb-482c0e5012cf\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.730475 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-logs\") pod \"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b\" (UID: \"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b\") " Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.730507 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22892637-d36e-4ef8-b0cb-482c0e5012cf-config-data-custom\") pod 
\"22892637-d36e-4ef8-b0cb-482c0e5012cf\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.730547 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22892637-d36e-4ef8-b0cb-482c0e5012cf-logs\") pod \"22892637-d36e-4ef8-b0cb-482c0e5012cf\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.730564 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/22892637-d36e-4ef8-b0cb-482c0e5012cf-etc-machine-id\") pod \"22892637-d36e-4ef8-b0cb-482c0e5012cf\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.730676 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-config-data-custom\") pod \"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b\" (UID: \"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b\") " Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.730696 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22892637-d36e-4ef8-b0cb-482c0e5012cf-combined-ca-bundle\") pod \"22892637-d36e-4ef8-b0cb-482c0e5012cf\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.730727 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22892637-d36e-4ef8-b0cb-482c0e5012cf-scripts\") pod \"22892637-d36e-4ef8-b0cb-482c0e5012cf\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.730761 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-d6t9w\" (UniqueName: \"kubernetes.io/projected/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-kube-api-access-d6t9w\") pod \"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b\" (UID: \"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b\") " Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.730818 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-config-data\") pod \"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b\" (UID: \"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b\") " Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.730836 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vd6w\" (UniqueName: \"kubernetes.io/projected/22892637-d36e-4ef8-b0cb-482c0e5012cf-kube-api-access-2vd6w\") pod \"22892637-d36e-4ef8-b0cb-482c0e5012cf\" (UID: \"22892637-d36e-4ef8-b0cb-482c0e5012cf\") " Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.731256 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22892637-d36e-4ef8-b0cb-482c0e5012cf-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "22892637-d36e-4ef8-b0cb-482c0e5012cf" (UID: "22892637-d36e-4ef8-b0cb-482c0e5012cf"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.731636 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22892637-d36e-4ef8-b0cb-482c0e5012cf-logs" (OuterVolumeSpecName: "logs") pod "22892637-d36e-4ef8-b0cb-482c0e5012cf" (UID: "22892637-d36e-4ef8-b0cb-482c0e5012cf"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.731815 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-logs" (OuterVolumeSpecName: "logs") pod "bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b" (UID: "bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.732062 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-logs\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.732079 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22892637-d36e-4ef8-b0cb-482c0e5012cf-logs\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.732091 4796 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/22892637-d36e-4ef8-b0cb-482c0e5012cf-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.736767 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22892637-d36e-4ef8-b0cb-482c0e5012cf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "22892637-d36e-4ef8-b0cb-482c0e5012cf" (UID: "22892637-d36e-4ef8-b0cb-482c0e5012cf"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.737106 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b" (UID: "bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.738681 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22892637-d36e-4ef8-b0cb-482c0e5012cf-kube-api-access-2vd6w" (OuterVolumeSpecName: "kube-api-access-2vd6w") pod "22892637-d36e-4ef8-b0cb-482c0e5012cf" (UID: "22892637-d36e-4ef8-b0cb-482c0e5012cf"). InnerVolumeSpecName "kube-api-access-2vd6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.739296 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22892637-d36e-4ef8-b0cb-482c0e5012cf-scripts" (OuterVolumeSpecName: "scripts") pod "22892637-d36e-4ef8-b0cb-482c0e5012cf" (UID: "22892637-d36e-4ef8-b0cb-482c0e5012cf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.739692 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-kube-api-access-d6t9w" (OuterVolumeSpecName: "kube-api-access-d6t9w") pod "bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b" (UID: "bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b"). InnerVolumeSpecName "kube-api-access-d6t9w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.764819 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22892637-d36e-4ef8-b0cb-482c0e5012cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22892637-d36e-4ef8-b0cb-482c0e5012cf" (UID: "22892637-d36e-4ef8-b0cb-482c0e5012cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.767330 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b" (UID: "bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.799682 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-config-data" (OuterVolumeSpecName: "config-data") pod "bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b" (UID: "bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.809708 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22892637-d36e-4ef8-b0cb-482c0e5012cf-config-data" (OuterVolumeSpecName: "config-data") pod "22892637-d36e-4ef8-b0cb-482c0e5012cf" (UID: "22892637-d36e-4ef8-b0cb-482c0e5012cf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.833495 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.833527 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vd6w\" (UniqueName: \"kubernetes.io/projected/22892637-d36e-4ef8-b0cb-482c0e5012cf-kube-api-access-2vd6w\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.833538 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.833547 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22892637-d36e-4ef8-b0cb-482c0e5012cf-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.833555 4796 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22892637-d36e-4ef8-b0cb-482c0e5012cf-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.833563 4796 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.833581 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22892637-d36e-4ef8-b0cb-482c0e5012cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 
14:46:22.833591 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22892637-d36e-4ef8-b0cb-482c0e5012cf-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.833603 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6t9w\" (UniqueName: \"kubernetes.io/projected/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b-kube-api-access-d6t9w\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.958498 4796 generic.go:334] "Generic (PLEG): container finished" podID="22892637-d36e-4ef8-b0cb-482c0e5012cf" containerID="5be2c7232030c68266920eecd3fef17ead91914c675ca9a0516c54866ccd6bf7" exitCode=0 Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.958539 4796 generic.go:334] "Generic (PLEG): container finished" podID="22892637-d36e-4ef8-b0cb-482c0e5012cf" containerID="aac0f75944c4db76ada05cc25de9cd28e8bfbdef216c58d19103e97bc45c5c6f" exitCode=143 Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.958592 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.958624 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"22892637-d36e-4ef8-b0cb-482c0e5012cf","Type":"ContainerDied","Data":"5be2c7232030c68266920eecd3fef17ead91914c675ca9a0516c54866ccd6bf7"} Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.958660 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"22892637-d36e-4ef8-b0cb-482c0e5012cf","Type":"ContainerDied","Data":"aac0f75944c4db76ada05cc25de9cd28e8bfbdef216c58d19103e97bc45c5c6f"} Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.958673 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"22892637-d36e-4ef8-b0cb-482c0e5012cf","Type":"ContainerDied","Data":"85768aed86914c6687093fd2fa92300b5219f599ba64a6448c6a140662fdd636"} Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.958690 4796 scope.go:117] "RemoveContainer" containerID="5be2c7232030c68266920eecd3fef17ead91914c675ca9a0516c54866ccd6bf7" Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.961492 4796 generic.go:334] "Generic (PLEG): container finished" podID="bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b" containerID="831512dbcd0b153ba9758ce12dff538a6b8ad577f50bd470280149b4e22be0e9" exitCode=0 Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.961920 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b8bdcff86-mhf8m" event={"ID":"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b","Type":"ContainerDied","Data":"831512dbcd0b153ba9758ce12dff538a6b8ad577f50bd470280149b4e22be0e9"} Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.961957 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b8bdcff86-mhf8m" 
event={"ID":"bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b","Type":"ContainerDied","Data":"7bda8ede50ec466dbe97f1e4f83a396e7c4b5af2d8b04639d0f4afcf834c0f93"} Nov 25 14:46:22 crc kubenswrapper[4796]: I1125 14:46:22.962015 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b8bdcff86-mhf8m" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.009162 4796 scope.go:117] "RemoveContainer" containerID="aac0f75944c4db76ada05cc25de9cd28e8bfbdef216c58d19103e97bc45c5c6f" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.009298 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6b8bdcff86-mhf8m"] Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.018233 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6b8bdcff86-mhf8m"] Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.027399 4796 scope.go:117] "RemoveContainer" containerID="5be2c7232030c68266920eecd3fef17ead91914c675ca9a0516c54866ccd6bf7" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.027707 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 25 14:46:23 crc kubenswrapper[4796]: E1125 14:46:23.027870 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5be2c7232030c68266920eecd3fef17ead91914c675ca9a0516c54866ccd6bf7\": container with ID starting with 5be2c7232030c68266920eecd3fef17ead91914c675ca9a0516c54866ccd6bf7 not found: ID does not exist" containerID="5be2c7232030c68266920eecd3fef17ead91914c675ca9a0516c54866ccd6bf7" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.027908 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5be2c7232030c68266920eecd3fef17ead91914c675ca9a0516c54866ccd6bf7"} err="failed to get container status \"5be2c7232030c68266920eecd3fef17ead91914c675ca9a0516c54866ccd6bf7\": rpc error: code = 
NotFound desc = could not find container \"5be2c7232030c68266920eecd3fef17ead91914c675ca9a0516c54866ccd6bf7\": container with ID starting with 5be2c7232030c68266920eecd3fef17ead91914c675ca9a0516c54866ccd6bf7 not found: ID does not exist" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.027937 4796 scope.go:117] "RemoveContainer" containerID="aac0f75944c4db76ada05cc25de9cd28e8bfbdef216c58d19103e97bc45c5c6f" Nov 25 14:46:23 crc kubenswrapper[4796]: E1125 14:46:23.029009 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aac0f75944c4db76ada05cc25de9cd28e8bfbdef216c58d19103e97bc45c5c6f\": container with ID starting with aac0f75944c4db76ada05cc25de9cd28e8bfbdef216c58d19103e97bc45c5c6f not found: ID does not exist" containerID="aac0f75944c4db76ada05cc25de9cd28e8bfbdef216c58d19103e97bc45c5c6f" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.029057 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aac0f75944c4db76ada05cc25de9cd28e8bfbdef216c58d19103e97bc45c5c6f"} err="failed to get container status \"aac0f75944c4db76ada05cc25de9cd28e8bfbdef216c58d19103e97bc45c5c6f\": rpc error: code = NotFound desc = could not find container \"aac0f75944c4db76ada05cc25de9cd28e8bfbdef216c58d19103e97bc45c5c6f\": container with ID starting with aac0f75944c4db76ada05cc25de9cd28e8bfbdef216c58d19103e97bc45c5c6f not found: ID does not exist" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.029083 4796 scope.go:117] "RemoveContainer" containerID="5be2c7232030c68266920eecd3fef17ead91914c675ca9a0516c54866ccd6bf7" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.030911 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5be2c7232030c68266920eecd3fef17ead91914c675ca9a0516c54866ccd6bf7"} err="failed to get container status \"5be2c7232030c68266920eecd3fef17ead91914c675ca9a0516c54866ccd6bf7\": rpc 
error: code = NotFound desc = could not find container \"5be2c7232030c68266920eecd3fef17ead91914c675ca9a0516c54866ccd6bf7\": container with ID starting with 5be2c7232030c68266920eecd3fef17ead91914c675ca9a0516c54866ccd6bf7 not found: ID does not exist" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.030941 4796 scope.go:117] "RemoveContainer" containerID="aac0f75944c4db76ada05cc25de9cd28e8bfbdef216c58d19103e97bc45c5c6f" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.032239 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aac0f75944c4db76ada05cc25de9cd28e8bfbdef216c58d19103e97bc45c5c6f"} err="failed to get container status \"aac0f75944c4db76ada05cc25de9cd28e8bfbdef216c58d19103e97bc45c5c6f\": rpc error: code = NotFound desc = could not find container \"aac0f75944c4db76ada05cc25de9cd28e8bfbdef216c58d19103e97bc45c5c6f\": container with ID starting with aac0f75944c4db76ada05cc25de9cd28e8bfbdef216c58d19103e97bc45c5c6f not found: ID does not exist" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.032265 4796 scope.go:117] "RemoveContainer" containerID="831512dbcd0b153ba9758ce12dff538a6b8ad577f50bd470280149b4e22be0e9" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.060385 4796 scope.go:117] "RemoveContainer" containerID="3e3e2043a5944e072c9b65119ebacac447343c43b93c2d4f121cec4465088275" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.063757 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.071531 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 25 14:46:23 crc kubenswrapper[4796]: E1125 14:46:23.072785 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b" containerName="barbican-api-log" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.072810 4796 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b" containerName="barbican-api-log" Nov 25 14:46:23 crc kubenswrapper[4796]: E1125 14:46:23.072943 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22892637-d36e-4ef8-b0cb-482c0e5012cf" containerName="cinder-api" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.072955 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="22892637-d36e-4ef8-b0cb-482c0e5012cf" containerName="cinder-api" Nov 25 14:46:23 crc kubenswrapper[4796]: E1125 14:46:23.072982 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64d653a7-a2c8-439a-9b7c-733682c79eeb" containerName="dnsmasq-dns" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.073020 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="64d653a7-a2c8-439a-9b7c-733682c79eeb" containerName="dnsmasq-dns" Nov 25 14:46:23 crc kubenswrapper[4796]: E1125 14:46:23.073031 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22892637-d36e-4ef8-b0cb-482c0e5012cf" containerName="cinder-api-log" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.073039 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="22892637-d36e-4ef8-b0cb-482c0e5012cf" containerName="cinder-api-log" Nov 25 14:46:23 crc kubenswrapper[4796]: E1125 14:46:23.073072 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64d653a7-a2c8-439a-9b7c-733682c79eeb" containerName="init" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.073079 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="64d653a7-a2c8-439a-9b7c-733682c79eeb" containerName="init" Nov 25 14:46:23 crc kubenswrapper[4796]: E1125 14:46:23.073091 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b" containerName="barbican-api" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.073098 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b" 
containerName="barbican-api" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.073332 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="22892637-d36e-4ef8-b0cb-482c0e5012cf" containerName="cinder-api" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.073359 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b" containerName="barbican-api-log" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.073374 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="22892637-d36e-4ef8-b0cb-482c0e5012cf" containerName="cinder-api-log" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.073384 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b" containerName="barbican-api" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.073400 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="64d653a7-a2c8-439a-9b7c-733682c79eeb" containerName="dnsmasq-dns" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.074754 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.078465 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.078676 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.078822 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.079789 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.127499 4796 scope.go:117] "RemoveContainer" containerID="831512dbcd0b153ba9758ce12dff538a6b8ad577f50bd470280149b4e22be0e9" Nov 25 14:46:23 crc kubenswrapper[4796]: E1125 14:46:23.127777 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"831512dbcd0b153ba9758ce12dff538a6b8ad577f50bd470280149b4e22be0e9\": container with ID starting with 831512dbcd0b153ba9758ce12dff538a6b8ad577f50bd470280149b4e22be0e9 not found: ID does not exist" containerID="831512dbcd0b153ba9758ce12dff538a6b8ad577f50bd470280149b4e22be0e9" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.127849 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"831512dbcd0b153ba9758ce12dff538a6b8ad577f50bd470280149b4e22be0e9"} err="failed to get container status \"831512dbcd0b153ba9758ce12dff538a6b8ad577f50bd470280149b4e22be0e9\": rpc error: code = NotFound desc = could not find container \"831512dbcd0b153ba9758ce12dff538a6b8ad577f50bd470280149b4e22be0e9\": container with ID starting with 831512dbcd0b153ba9758ce12dff538a6b8ad577f50bd470280149b4e22be0e9 not found: ID does not exist" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 
14:46:23.127884 4796 scope.go:117] "RemoveContainer" containerID="3e3e2043a5944e072c9b65119ebacac447343c43b93c2d4f121cec4465088275" Nov 25 14:46:23 crc kubenswrapper[4796]: E1125 14:46:23.128143 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e3e2043a5944e072c9b65119ebacac447343c43b93c2d4f121cec4465088275\": container with ID starting with 3e3e2043a5944e072c9b65119ebacac447343c43b93c2d4f121cec4465088275 not found: ID does not exist" containerID="3e3e2043a5944e072c9b65119ebacac447343c43b93c2d4f121cec4465088275" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.128175 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e3e2043a5944e072c9b65119ebacac447343c43b93c2d4f121cec4465088275"} err="failed to get container status \"3e3e2043a5944e072c9b65119ebacac447343c43b93c2d4f121cec4465088275\": rpc error: code = NotFound desc = could not find container \"3e3e2043a5944e072c9b65119ebacac447343c43b93c2d4f121cec4465088275\": container with ID starting with 3e3e2043a5944e072c9b65119ebacac447343c43b93c2d4f121cec4465088275 not found: ID does not exist" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.140543 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/213ec08a-1b84-45bb-a867-7f077f18c908-logs\") pod \"cinder-api-0\" (UID: \"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.140625 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/213ec08a-1b84-45bb-a867-7f077f18c908-config-data\") pod \"cinder-api-0\" (UID: \"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.140648 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/213ec08a-1b84-45bb-a867-7f077f18c908-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.140673 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/213ec08a-1b84-45bb-a867-7f077f18c908-scripts\") pod \"cinder-api-0\" (UID: \"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.140755 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/213ec08a-1b84-45bb-a867-7f077f18c908-config-data-custom\") pod \"cinder-api-0\" (UID: \"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.140774 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/213ec08a-1b84-45bb-a867-7f077f18c908-etc-machine-id\") pod \"cinder-api-0\" (UID: \"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.140848 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jrrj\" (UniqueName: \"kubernetes.io/projected/213ec08a-1b84-45bb-a867-7f077f18c908-kube-api-access-5jrrj\") pod \"cinder-api-0\" (UID: \"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.140867 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/213ec08a-1b84-45bb-a867-7f077f18c908-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.140883 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/213ec08a-1b84-45bb-a867-7f077f18c908-public-tls-certs\") pod \"cinder-api-0\" (UID: \"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.242911 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/213ec08a-1b84-45bb-a867-7f077f18c908-config-data-custom\") pod \"cinder-api-0\" (UID: \"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.242962 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/213ec08a-1b84-45bb-a867-7f077f18c908-etc-machine-id\") pod \"cinder-api-0\" (UID: \"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.243035 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jrrj\" (UniqueName: \"kubernetes.io/projected/213ec08a-1b84-45bb-a867-7f077f18c908-kube-api-access-5jrrj\") pod \"cinder-api-0\" (UID: \"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.243052 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/213ec08a-1b84-45bb-a867-7f077f18c908-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.243069 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/213ec08a-1b84-45bb-a867-7f077f18c908-public-tls-certs\") pod \"cinder-api-0\" (UID: \"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.243178 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/213ec08a-1b84-45bb-a867-7f077f18c908-etc-machine-id\") pod \"cinder-api-0\" (UID: \"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.243768 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/213ec08a-1b84-45bb-a867-7f077f18c908-logs\") pod \"cinder-api-0\" (UID: \"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.243829 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/213ec08a-1b84-45bb-a867-7f077f18c908-config-data\") pod \"cinder-api-0\" (UID: \"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.243854 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/213ec08a-1b84-45bb-a867-7f077f18c908-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.243882 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/213ec08a-1b84-45bb-a867-7f077f18c908-scripts\") pod \"cinder-api-0\" (UID: \"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.244108 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/213ec08a-1b84-45bb-a867-7f077f18c908-logs\") pod \"cinder-api-0\" (UID: \"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.247421 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/213ec08a-1b84-45bb-a867-7f077f18c908-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.248284 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/213ec08a-1b84-45bb-a867-7f077f18c908-config-data\") pod \"cinder-api-0\" (UID: \"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.248422 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/213ec08a-1b84-45bb-a867-7f077f18c908-config-data-custom\") pod \"cinder-api-0\" (UID: \"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.248435 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/213ec08a-1b84-45bb-a867-7f077f18c908-public-tls-certs\") pod \"cinder-api-0\" (UID: \"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.252717 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/213ec08a-1b84-45bb-a867-7f077f18c908-scripts\") pod \"cinder-api-0\" (UID: \"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.261708 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/213ec08a-1b84-45bb-a867-7f077f18c908-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.265007 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jrrj\" (UniqueName: \"kubernetes.io/projected/213ec08a-1b84-45bb-a867-7f077f18c908-kube-api-access-5jrrj\") pod \"cinder-api-0\" (UID: \"213ec08a-1b84-45bb-a867-7f077f18c908\") " pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.422614 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.430725 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.911812 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 14:46:23 crc kubenswrapper[4796]: W1125 14:46:23.914613 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod213ec08a_1b84_45bb_a867_7f077f18c908.slice/crio-932d975c42024bdb143b9bd30f152de1e544108ddd468c3cdc7f1c5b75cb5022 WatchSource:0}: Error finding container 932d975c42024bdb143b9bd30f152de1e544108ddd468c3cdc7f1c5b75cb5022: Status 404 returned error can't find the container with id 932d975c42024bdb143b9bd30f152de1e544108ddd468c3cdc7f1c5b75cb5022 Nov 25 14:46:23 crc kubenswrapper[4796]: I1125 14:46:23.975355 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"213ec08a-1b84-45bb-a867-7f077f18c908","Type":"ContainerStarted","Data":"932d975c42024bdb143b9bd30f152de1e544108ddd468c3cdc7f1c5b75cb5022"} Nov 25 14:46:24 crc kubenswrapper[4796]: I1125 14:46:24.424957 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22892637-d36e-4ef8-b0cb-482c0e5012cf" path="/var/lib/kubelet/pods/22892637-d36e-4ef8-b0cb-482c0e5012cf/volumes" Nov 25 14:46:24 crc kubenswrapper[4796]: I1125 14:46:24.426097 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b" path="/var/lib/kubelet/pods/bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b/volumes" Nov 25 14:46:24 crc kubenswrapper[4796]: I1125 14:46:24.990352 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"213ec08a-1b84-45bb-a867-7f077f18c908","Type":"ContainerStarted","Data":"0731a2882bbc8e82d0014958c873426f99b09b72ff3bb61c8db61bf74ff9355e"} Nov 25 14:46:25 crc kubenswrapper[4796]: I1125 14:46:25.329976 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5d994c97d7-9qxnr" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.000699 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"213ec08a-1b84-45bb-a867-7f077f18c908","Type":"ContainerStarted","Data":"a98ca062eda486db3abbbf488ad08d3815cd6ad71df910f8848fc7d0c224ca25"} Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.002207 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.016379 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.017605 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.019695 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.020188 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-c9lsx" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.020720 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.026794 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.046665 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.046644228 podStartE2EDuration="3.046644228s" podCreationTimestamp="2025-11-25 14:46:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:46:26.029305088 +0000 UTC m=+1314.372414532" watchObservedRunningTime="2025-11-25 14:46:26.046644228 +0000 UTC m=+1314.389753662" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.133146 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/08d84c00-9bd8-4459-a0da-bdec85c52986-openstack-config\") pod \"openstackclient\" (UID: \"08d84c00-9bd8-4459-a0da-bdec85c52986\") " pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.133333 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94ncc\" (UniqueName: \"kubernetes.io/projected/08d84c00-9bd8-4459-a0da-bdec85c52986-kube-api-access-94ncc\") pod \"openstackclient\" (UID: 
\"08d84c00-9bd8-4459-a0da-bdec85c52986\") " pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.133386 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d84c00-9bd8-4459-a0da-bdec85c52986-combined-ca-bundle\") pod \"openstackclient\" (UID: \"08d84c00-9bd8-4459-a0da-bdec85c52986\") " pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.133459 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/08d84c00-9bd8-4459-a0da-bdec85c52986-openstack-config-secret\") pod \"openstackclient\" (UID: \"08d84c00-9bd8-4459-a0da-bdec85c52986\") " pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.235740 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/08d84c00-9bd8-4459-a0da-bdec85c52986-openstack-config\") pod \"openstackclient\" (UID: \"08d84c00-9bd8-4459-a0da-bdec85c52986\") " pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.236064 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94ncc\" (UniqueName: \"kubernetes.io/projected/08d84c00-9bd8-4459-a0da-bdec85c52986-kube-api-access-94ncc\") pod \"openstackclient\" (UID: \"08d84c00-9bd8-4459-a0da-bdec85c52986\") " pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.236086 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d84c00-9bd8-4459-a0da-bdec85c52986-combined-ca-bundle\") pod \"openstackclient\" (UID: \"08d84c00-9bd8-4459-a0da-bdec85c52986\") " pod="openstack/openstackclient" Nov 25 14:46:26 crc 
kubenswrapper[4796]: I1125 14:46:26.236116 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/08d84c00-9bd8-4459-a0da-bdec85c52986-openstack-config-secret\") pod \"openstackclient\" (UID: \"08d84c00-9bd8-4459-a0da-bdec85c52986\") " pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.237740 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/08d84c00-9bd8-4459-a0da-bdec85c52986-openstack-config\") pod \"openstackclient\" (UID: \"08d84c00-9bd8-4459-a0da-bdec85c52986\") " pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.242118 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/08d84c00-9bd8-4459-a0da-bdec85c52986-openstack-config-secret\") pod \"openstackclient\" (UID: \"08d84c00-9bd8-4459-a0da-bdec85c52986\") " pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.242754 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d84c00-9bd8-4459-a0da-bdec85c52986-combined-ca-bundle\") pod \"openstackclient\" (UID: \"08d84c00-9bd8-4459-a0da-bdec85c52986\") " pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.259633 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94ncc\" (UniqueName: \"kubernetes.io/projected/08d84c00-9bd8-4459-a0da-bdec85c52986-kube-api-access-94ncc\") pod \"openstackclient\" (UID: \"08d84c00-9bd8-4459-a0da-bdec85c52986\") " pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.337436 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.370669 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.435958 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.438662 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.440455 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.448918 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.572354 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/120f9ac5-531c-4821-b033-d4b316f6ea61-openstack-config-secret\") pod \"openstackclient\" (UID: \"120f9ac5-531c-4821-b033-d4b316f6ea61\") " pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.572495 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/120f9ac5-531c-4821-b033-d4b316f6ea61-openstack-config\") pod \"openstackclient\" (UID: \"120f9ac5-531c-4821-b033-d4b316f6ea61\") " pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: E1125 14:46:26.572526 4796 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 25 14:46:26 crc kubenswrapper[4796]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_08d84c00-9bd8-4459-a0da-bdec85c52986_0(d8beab1ba393ebe34543e96551abadd5f7655df5ed43bc3071a4d6595bb686bc): error 
adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d8beab1ba393ebe34543e96551abadd5f7655df5ed43bc3071a4d6595bb686bc" Netns:"/var/run/netns/2806ba0c-606a-47ec-b016-a9039e63f6c4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=d8beab1ba393ebe34543e96551abadd5f7655df5ed43bc3071a4d6595bb686bc;K8S_POD_UID=08d84c00-9bd8-4459-a0da-bdec85c52986" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/08d84c00-9bd8-4459-a0da-bdec85c52986]: expected pod UID "08d84c00-9bd8-4459-a0da-bdec85c52986" but got "120f9ac5-531c-4821-b033-d4b316f6ea61" from Kube API Nov 25 14:46:26 crc kubenswrapper[4796]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 14:46:26 crc kubenswrapper[4796]: > Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.572596 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ffmg\" (UniqueName: \"kubernetes.io/projected/120f9ac5-531c-4821-b033-d4b316f6ea61-kube-api-access-2ffmg\") pod \"openstackclient\" (UID: \"120f9ac5-531c-4821-b033-d4b316f6ea61\") " pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: E1125 14:46:26.572619 4796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 25 14:46:26 crc kubenswrapper[4796]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_openstackclient_openstack_08d84c00-9bd8-4459-a0da-bdec85c52986_0(d8beab1ba393ebe34543e96551abadd5f7655df5ed43bc3071a4d6595bb686bc): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d8beab1ba393ebe34543e96551abadd5f7655df5ed43bc3071a4d6595bb686bc" Netns:"/var/run/netns/2806ba0c-606a-47ec-b016-a9039e63f6c4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=d8beab1ba393ebe34543e96551abadd5f7655df5ed43bc3071a4d6595bb686bc;K8S_POD_UID=08d84c00-9bd8-4459-a0da-bdec85c52986" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/08d84c00-9bd8-4459-a0da-bdec85c52986]: expected pod UID "08d84c00-9bd8-4459-a0da-bdec85c52986" but got "120f9ac5-531c-4821-b033-d4b316f6ea61" from Kube API Nov 25 14:46:26 crc kubenswrapper[4796]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 14:46:26 crc kubenswrapper[4796]: > pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.572740 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/120f9ac5-531c-4821-b033-d4b316f6ea61-combined-ca-bundle\") pod \"openstackclient\" (UID: \"120f9ac5-531c-4821-b033-d4b316f6ea61\") " pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.674150 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/120f9ac5-531c-4821-b033-d4b316f6ea61-combined-ca-bundle\") pod \"openstackclient\" (UID: \"120f9ac5-531c-4821-b033-d4b316f6ea61\") " pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.674264 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/120f9ac5-531c-4821-b033-d4b316f6ea61-openstack-config-secret\") pod \"openstackclient\" (UID: \"120f9ac5-531c-4821-b033-d4b316f6ea61\") " pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.674363 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/120f9ac5-531c-4821-b033-d4b316f6ea61-openstack-config\") pod \"openstackclient\" (UID: \"120f9ac5-531c-4821-b033-d4b316f6ea61\") " pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.674433 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ffmg\" (UniqueName: \"kubernetes.io/projected/120f9ac5-531c-4821-b033-d4b316f6ea61-kube-api-access-2ffmg\") pod \"openstackclient\" (UID: \"120f9ac5-531c-4821-b033-d4b316f6ea61\") " pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.675335 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/120f9ac5-531c-4821-b033-d4b316f6ea61-openstack-config\") pod \"openstackclient\" (UID: \"120f9ac5-531c-4821-b033-d4b316f6ea61\") " pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.687699 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/120f9ac5-531c-4821-b033-d4b316f6ea61-combined-ca-bundle\") pod \"openstackclient\" (UID: 
\"120f9ac5-531c-4821-b033-d4b316f6ea61\") " pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.689267 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/120f9ac5-531c-4821-b033-d4b316f6ea61-openstack-config-secret\") pod \"openstackclient\" (UID: \"120f9ac5-531c-4821-b033-d4b316f6ea61\") " pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.697469 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ffmg\" (UniqueName: \"kubernetes.io/projected/120f9ac5-531c-4821-b033-d4b316f6ea61-kube-api-access-2ffmg\") pod \"openstackclient\" (UID: \"120f9ac5-531c-4821-b033-d4b316f6ea61\") " pod="openstack/openstackclient" Nov 25 14:46:26 crc kubenswrapper[4796]: I1125 14:46:26.828523 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 14:46:27 crc kubenswrapper[4796]: I1125 14:46:27.011337 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 14:46:27 crc kubenswrapper[4796]: I1125 14:46:27.022534 4796 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="08d84c00-9bd8-4459-a0da-bdec85c52986" podUID="120f9ac5-531c-4821-b033-d4b316f6ea61" Nov 25 14:46:27 crc kubenswrapper[4796]: I1125 14:46:27.025040 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 25 14:46:27 crc kubenswrapper[4796]: I1125 14:46:27.183263 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94ncc\" (UniqueName: \"kubernetes.io/projected/08d84c00-9bd8-4459-a0da-bdec85c52986-kube-api-access-94ncc\") pod \"08d84c00-9bd8-4459-a0da-bdec85c52986\" (UID: \"08d84c00-9bd8-4459-a0da-bdec85c52986\") " Nov 25 14:46:27 crc kubenswrapper[4796]: I1125 14:46:27.184206 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/08d84c00-9bd8-4459-a0da-bdec85c52986-openstack-config-secret\") pod \"08d84c00-9bd8-4459-a0da-bdec85c52986\" (UID: \"08d84c00-9bd8-4459-a0da-bdec85c52986\") " Nov 25 14:46:27 crc kubenswrapper[4796]: I1125 14:46:27.184341 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/08d84c00-9bd8-4459-a0da-bdec85c52986-openstack-config\") pod \"08d84c00-9bd8-4459-a0da-bdec85c52986\" (UID: \"08d84c00-9bd8-4459-a0da-bdec85c52986\") " Nov 25 14:46:27 crc kubenswrapper[4796]: I1125 14:46:27.184393 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d84c00-9bd8-4459-a0da-bdec85c52986-combined-ca-bundle\") pod \"08d84c00-9bd8-4459-a0da-bdec85c52986\" (UID: \"08d84c00-9bd8-4459-a0da-bdec85c52986\") " Nov 25 14:46:27 crc kubenswrapper[4796]: I1125 14:46:27.185780 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08d84c00-9bd8-4459-a0da-bdec85c52986-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "08d84c00-9bd8-4459-a0da-bdec85c52986" (UID: "08d84c00-9bd8-4459-a0da-bdec85c52986"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:46:27 crc kubenswrapper[4796]: I1125 14:46:27.193055 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08d84c00-9bd8-4459-a0da-bdec85c52986-kube-api-access-94ncc" (OuterVolumeSpecName: "kube-api-access-94ncc") pod "08d84c00-9bd8-4459-a0da-bdec85c52986" (UID: "08d84c00-9bd8-4459-a0da-bdec85c52986"). InnerVolumeSpecName "kube-api-access-94ncc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:46:27 crc kubenswrapper[4796]: I1125 14:46:27.194213 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08d84c00-9bd8-4459-a0da-bdec85c52986-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08d84c00-9bd8-4459-a0da-bdec85c52986" (UID: "08d84c00-9bd8-4459-a0da-bdec85c52986"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:27 crc kubenswrapper[4796]: I1125 14:46:27.202005 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08d84c00-9bd8-4459-a0da-bdec85c52986-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "08d84c00-9bd8-4459-a0da-bdec85c52986" (UID: "08d84c00-9bd8-4459-a0da-bdec85c52986"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:27 crc kubenswrapper[4796]: I1125 14:46:27.285487 4796 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/08d84c00-9bd8-4459-a0da-bdec85c52986-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:27 crc kubenswrapper[4796]: I1125 14:46:27.285526 4796 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/08d84c00-9bd8-4459-a0da-bdec85c52986-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:27 crc kubenswrapper[4796]: I1125 14:46:27.285535 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08d84c00-9bd8-4459-a0da-bdec85c52986-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:27 crc kubenswrapper[4796]: I1125 14:46:27.285543 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94ncc\" (UniqueName: \"kubernetes.io/projected/08d84c00-9bd8-4459-a0da-bdec85c52986-kube-api-access-94ncc\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:27 crc kubenswrapper[4796]: I1125 14:46:27.381327 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b8bdcff86-mhf8m" podUID="bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 14:46:27 crc kubenswrapper[4796]: I1125 14:46:27.382093 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b8bdcff86-mhf8m" podUID="bcbcf9a0-da12-455b-8b7e-fb64c6dfdc0b" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 14:46:27 crc kubenswrapper[4796]: I1125 
14:46:27.437165 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 14:46:27 crc kubenswrapper[4796]: I1125 14:46:27.894597 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5f9cd6669d-kmwxf" Nov 25 14:46:28 crc kubenswrapper[4796]: I1125 14:46:28.021547 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 14:46:28 crc kubenswrapper[4796]: I1125 14:46:28.021539 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"120f9ac5-531c-4821-b033-d4b316f6ea61","Type":"ContainerStarted","Data":"f3f3046e1c8e2f287de04a78ce88a8aab2d8de5f9549dbbc212b561161d3be8d"} Nov 25 14:46:28 crc kubenswrapper[4796]: I1125 14:46:28.026382 4796 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="08d84c00-9bd8-4459-a0da-bdec85c52986" podUID="120f9ac5-531c-4821-b033-d4b316f6ea61" Nov 25 14:46:28 crc kubenswrapper[4796]: I1125 14:46:28.277100 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-79bd96dcd6-f2n5f" Nov 25 14:46:28 crc kubenswrapper[4796]: I1125 14:46:28.316344 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-79bd96dcd6-f2n5f" Nov 25 14:46:28 crc kubenswrapper[4796]: I1125 14:46:28.422515 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08d84c00-9bd8-4459-a0da-bdec85c52986" path="/var/lib/kubelet/pods/08d84c00-9bd8-4459-a0da-bdec85c52986/volumes" Nov 25 14:46:28 crc kubenswrapper[4796]: I1125 14:46:28.512073 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" Nov 25 14:46:28 crc kubenswrapper[4796]: I1125 14:46:28.564438 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ddlb4"] Nov 25 
14:46:28 crc kubenswrapper[4796]: I1125 14:46:28.564695 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" podUID="7399716c-47ba-4a11-81b0-f206c95855df" containerName="dnsmasq-dns" containerID="cri-o://af003b11ed836695a7ec33f6dd33bdce86d25aac9629ff36547fbf1378a9ca96" gracePeriod=10 Nov 25 14:46:28 crc kubenswrapper[4796]: I1125 14:46:28.766295 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 25 14:46:28 crc kubenswrapper[4796]: I1125 14:46:28.826528 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 14:46:29 crc kubenswrapper[4796]: I1125 14:46:29.048345 4796 generic.go:334] "Generic (PLEG): container finished" podID="7399716c-47ba-4a11-81b0-f206c95855df" containerID="af003b11ed836695a7ec33f6dd33bdce86d25aac9629ff36547fbf1378a9ca96" exitCode=0 Nov 25 14:46:29 crc kubenswrapper[4796]: I1125 14:46:29.048761 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" event={"ID":"7399716c-47ba-4a11-81b0-f206c95855df","Type":"ContainerDied","Data":"af003b11ed836695a7ec33f6dd33bdce86d25aac9629ff36547fbf1378a9ca96"} Nov 25 14:46:29 crc kubenswrapper[4796]: I1125 14:46:29.048918 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ef983792-84b3-4cd2-86be-253f5619b093" containerName="cinder-scheduler" containerID="cri-o://ffcf66d72f507eb2126b8be605617c121e6421f484b7a6753fb46edab71ced5b" gracePeriod=30 Nov 25 14:46:29 crc kubenswrapper[4796]: I1125 14:46:29.049030 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ef983792-84b3-4cd2-86be-253f5619b093" containerName="probe" containerID="cri-o://cfb87b0e7fd7ad2ac439c643785c7d7eb6689e5670b4bfc44f9e2c30de306418" gracePeriod=30 Nov 25 14:46:29 crc kubenswrapper[4796]: I1125 
14:46:29.132231 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" Nov 25 14:46:29 crc kubenswrapper[4796]: I1125 14:46:29.325135 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-dns-svc\") pod \"7399716c-47ba-4a11-81b0-f206c95855df\" (UID: \"7399716c-47ba-4a11-81b0-f206c95855df\") " Nov 25 14:46:29 crc kubenswrapper[4796]: I1125 14:46:29.325187 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-dns-swift-storage-0\") pod \"7399716c-47ba-4a11-81b0-f206c95855df\" (UID: \"7399716c-47ba-4a11-81b0-f206c95855df\") " Nov 25 14:46:29 crc kubenswrapper[4796]: I1125 14:46:29.325295 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-ovsdbserver-nb\") pod \"7399716c-47ba-4a11-81b0-f206c95855df\" (UID: \"7399716c-47ba-4a11-81b0-f206c95855df\") " Nov 25 14:46:29 crc kubenswrapper[4796]: I1125 14:46:29.325390 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lklfg\" (UniqueName: \"kubernetes.io/projected/7399716c-47ba-4a11-81b0-f206c95855df-kube-api-access-lklfg\") pod \"7399716c-47ba-4a11-81b0-f206c95855df\" (UID: \"7399716c-47ba-4a11-81b0-f206c95855df\") " Nov 25 14:46:29 crc kubenswrapper[4796]: I1125 14:46:29.325411 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-config\") pod \"7399716c-47ba-4a11-81b0-f206c95855df\" (UID: \"7399716c-47ba-4a11-81b0-f206c95855df\") " Nov 25 14:46:29 crc kubenswrapper[4796]: I1125 14:46:29.325433 4796 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-ovsdbserver-sb\") pod \"7399716c-47ba-4a11-81b0-f206c95855df\" (UID: \"7399716c-47ba-4a11-81b0-f206c95855df\") " Nov 25 14:46:29 crc kubenswrapper[4796]: I1125 14:46:29.359452 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7399716c-47ba-4a11-81b0-f206c95855df-kube-api-access-lklfg" (OuterVolumeSpecName: "kube-api-access-lklfg") pod "7399716c-47ba-4a11-81b0-f206c95855df" (UID: "7399716c-47ba-4a11-81b0-f206c95855df"). InnerVolumeSpecName "kube-api-access-lklfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:46:29 crc kubenswrapper[4796]: I1125 14:46:29.427678 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lklfg\" (UniqueName: \"kubernetes.io/projected/7399716c-47ba-4a11-81b0-f206c95855df-kube-api-access-lklfg\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:29 crc kubenswrapper[4796]: I1125 14:46:29.475789 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7399716c-47ba-4a11-81b0-f206c95855df" (UID: "7399716c-47ba-4a11-81b0-f206c95855df"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:46:29 crc kubenswrapper[4796]: I1125 14:46:29.491348 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7399716c-47ba-4a11-81b0-f206c95855df" (UID: "7399716c-47ba-4a11-81b0-f206c95855df"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:46:29 crc kubenswrapper[4796]: I1125 14:46:29.492114 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-config" (OuterVolumeSpecName: "config") pod "7399716c-47ba-4a11-81b0-f206c95855df" (UID: "7399716c-47ba-4a11-81b0-f206c95855df"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:46:29 crc kubenswrapper[4796]: I1125 14:46:29.497383 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7399716c-47ba-4a11-81b0-f206c95855df" (UID: "7399716c-47ba-4a11-81b0-f206c95855df"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:46:29 crc kubenswrapper[4796]: I1125 14:46:29.506897 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7399716c-47ba-4a11-81b0-f206c95855df" (UID: "7399716c-47ba-4a11-81b0-f206c95855df"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:46:29 crc kubenswrapper[4796]: I1125 14:46:29.529365 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:29 crc kubenswrapper[4796]: I1125 14:46:29.529398 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:29 crc kubenswrapper[4796]: I1125 14:46:29.529409 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:29 crc kubenswrapper[4796]: I1125 14:46:29.529419 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:29 crc kubenswrapper[4796]: I1125 14:46:29.529428 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7399716c-47ba-4a11-81b0-f206c95855df-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:30 crc kubenswrapper[4796]: I1125 14:46:30.061620 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" event={"ID":"7399716c-47ba-4a11-81b0-f206c95855df","Type":"ContainerDied","Data":"c7e75af85b1a339d5a8e03028ede166c4a56861d15721cd2270da93a2e1b5d70"} Nov 25 14:46:30 crc kubenswrapper[4796]: I1125 14:46:30.061673 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-ddlb4" Nov 25 14:46:30 crc kubenswrapper[4796]: I1125 14:46:30.062006 4796 scope.go:117] "RemoveContainer" containerID="af003b11ed836695a7ec33f6dd33bdce86d25aac9629ff36547fbf1378a9ca96" Nov 25 14:46:30 crc kubenswrapper[4796]: I1125 14:46:30.078429 4796 generic.go:334] "Generic (PLEG): container finished" podID="ef983792-84b3-4cd2-86be-253f5619b093" containerID="cfb87b0e7fd7ad2ac439c643785c7d7eb6689e5670b4bfc44f9e2c30de306418" exitCode=0 Nov 25 14:46:30 crc kubenswrapper[4796]: I1125 14:46:30.078468 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ef983792-84b3-4cd2-86be-253f5619b093","Type":"ContainerDied","Data":"cfb87b0e7fd7ad2ac439c643785c7d7eb6689e5670b4bfc44f9e2c30de306418"} Nov 25 14:46:30 crc kubenswrapper[4796]: I1125 14:46:30.102897 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ddlb4"] Nov 25 14:46:30 crc kubenswrapper[4796]: I1125 14:46:30.109987 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ddlb4"] Nov 25 14:46:30 crc kubenswrapper[4796]: I1125 14:46:30.121940 4796 scope.go:117] "RemoveContainer" containerID="c4548c58be93c87fce86465b9a44e96e0ffdf4db70952ba6e83fe1bb8bd3ed02" Nov 25 14:46:30 crc kubenswrapper[4796]: I1125 14:46:30.150616 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7b8d7f79d9-dhp4t" Nov 25 14:46:30 crc kubenswrapper[4796]: I1125 14:46:30.228159 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5f9cd6669d-kmwxf"] Nov 25 14:46:30 crc kubenswrapper[4796]: I1125 14:46:30.228368 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5f9cd6669d-kmwxf" podUID="d8d5b61c-f184-4963-ba9f-9cf698fd8e60" containerName="neutron-api" containerID="cri-o://01bdb9cd08a222ecf6317a59ee836ade4f1c0f917baeb824370b477ded583ac9" 
gracePeriod=30 Nov 25 14:46:30 crc kubenswrapper[4796]: I1125 14:46:30.228748 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5f9cd6669d-kmwxf" podUID="d8d5b61c-f184-4963-ba9f-9cf698fd8e60" containerName="neutron-httpd" containerID="cri-o://c8fd2c83594bc4161ed256d9a8b2042d8fb823c113c4eb68b03349d6af746dbb" gracePeriod=30 Nov 25 14:46:30 crc kubenswrapper[4796]: I1125 14:46:30.418979 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7399716c-47ba-4a11-81b0-f206c95855df" path="/var/lib/kubelet/pods/7399716c-47ba-4a11-81b0-f206c95855df/volumes" Nov 25 14:46:30 crc kubenswrapper[4796]: I1125 14:46:30.878476 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6b6dc55d99-xcq8j"] Nov 25 14:46:30 crc kubenswrapper[4796]: E1125 14:46:30.879114 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7399716c-47ba-4a11-81b0-f206c95855df" containerName="init" Nov 25 14:46:30 crc kubenswrapper[4796]: I1125 14:46:30.879130 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7399716c-47ba-4a11-81b0-f206c95855df" containerName="init" Nov 25 14:46:30 crc kubenswrapper[4796]: E1125 14:46:30.879172 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7399716c-47ba-4a11-81b0-f206c95855df" containerName="dnsmasq-dns" Nov 25 14:46:30 crc kubenswrapper[4796]: I1125 14:46:30.879179 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7399716c-47ba-4a11-81b0-f206c95855df" containerName="dnsmasq-dns" Nov 25 14:46:30 crc kubenswrapper[4796]: I1125 14:46:30.879357 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="7399716c-47ba-4a11-81b0-f206c95855df" containerName="dnsmasq-dns" Nov 25 14:46:30 crc kubenswrapper[4796]: I1125 14:46:30.881876 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:30 crc kubenswrapper[4796]: I1125 14:46:30.883974 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 25 14:46:30 crc kubenswrapper[4796]: I1125 14:46:30.885002 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 25 14:46:30 crc kubenswrapper[4796]: I1125 14:46:30.885429 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 25 14:46:30 crc kubenswrapper[4796]: I1125 14:46:30.910094 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6b6dc55d99-xcq8j"] Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.068555 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05a9e311-75a5-4732-9103-ba2bc1e708ad-public-tls-certs\") pod \"swift-proxy-6b6dc55d99-xcq8j\" (UID: \"05a9e311-75a5-4732-9103-ba2bc1e708ad\") " pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.068717 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzjfk\" (UniqueName: \"kubernetes.io/projected/05a9e311-75a5-4732-9103-ba2bc1e708ad-kube-api-access-qzjfk\") pod \"swift-proxy-6b6dc55d99-xcq8j\" (UID: \"05a9e311-75a5-4732-9103-ba2bc1e708ad\") " pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.068817 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05a9e311-75a5-4732-9103-ba2bc1e708ad-log-httpd\") pod \"swift-proxy-6b6dc55d99-xcq8j\" (UID: \"05a9e311-75a5-4732-9103-ba2bc1e708ad\") " pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:31 crc 
kubenswrapper[4796]: I1125 14:46:31.068849 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05a9e311-75a5-4732-9103-ba2bc1e708ad-config-data\") pod \"swift-proxy-6b6dc55d99-xcq8j\" (UID: \"05a9e311-75a5-4732-9103-ba2bc1e708ad\") " pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.068874 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/05a9e311-75a5-4732-9103-ba2bc1e708ad-etc-swift\") pod \"swift-proxy-6b6dc55d99-xcq8j\" (UID: \"05a9e311-75a5-4732-9103-ba2bc1e708ad\") " pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.068911 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05a9e311-75a5-4732-9103-ba2bc1e708ad-combined-ca-bundle\") pod \"swift-proxy-6b6dc55d99-xcq8j\" (UID: \"05a9e311-75a5-4732-9103-ba2bc1e708ad\") " pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.068959 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05a9e311-75a5-4732-9103-ba2bc1e708ad-internal-tls-certs\") pod \"swift-proxy-6b6dc55d99-xcq8j\" (UID: \"05a9e311-75a5-4732-9103-ba2bc1e708ad\") " pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.068992 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05a9e311-75a5-4732-9103-ba2bc1e708ad-run-httpd\") pod \"swift-proxy-6b6dc55d99-xcq8j\" (UID: \"05a9e311-75a5-4732-9103-ba2bc1e708ad\") " 
pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.089483 4796 generic.go:334] "Generic (PLEG): container finished" podID="d8d5b61c-f184-4963-ba9f-9cf698fd8e60" containerID="c8fd2c83594bc4161ed256d9a8b2042d8fb823c113c4eb68b03349d6af746dbb" exitCode=0 Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.089521 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f9cd6669d-kmwxf" event={"ID":"d8d5b61c-f184-4963-ba9f-9cf698fd8e60","Type":"ContainerDied","Data":"c8fd2c83594bc4161ed256d9a8b2042d8fb823c113c4eb68b03349d6af746dbb"} Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.170614 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05a9e311-75a5-4732-9103-ba2bc1e708ad-config-data\") pod \"swift-proxy-6b6dc55d99-xcq8j\" (UID: \"05a9e311-75a5-4732-9103-ba2bc1e708ad\") " pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.170664 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/05a9e311-75a5-4732-9103-ba2bc1e708ad-etc-swift\") pod \"swift-proxy-6b6dc55d99-xcq8j\" (UID: \"05a9e311-75a5-4732-9103-ba2bc1e708ad\") " pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.170699 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05a9e311-75a5-4732-9103-ba2bc1e708ad-combined-ca-bundle\") pod \"swift-proxy-6b6dc55d99-xcq8j\" (UID: \"05a9e311-75a5-4732-9103-ba2bc1e708ad\") " pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.170721 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/05a9e311-75a5-4732-9103-ba2bc1e708ad-internal-tls-certs\") pod \"swift-proxy-6b6dc55d99-xcq8j\" (UID: \"05a9e311-75a5-4732-9103-ba2bc1e708ad\") " pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.170741 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05a9e311-75a5-4732-9103-ba2bc1e708ad-run-httpd\") pod \"swift-proxy-6b6dc55d99-xcq8j\" (UID: \"05a9e311-75a5-4732-9103-ba2bc1e708ad\") " pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.170820 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05a9e311-75a5-4732-9103-ba2bc1e708ad-public-tls-certs\") pod \"swift-proxy-6b6dc55d99-xcq8j\" (UID: \"05a9e311-75a5-4732-9103-ba2bc1e708ad\") " pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.170856 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzjfk\" (UniqueName: \"kubernetes.io/projected/05a9e311-75a5-4732-9103-ba2bc1e708ad-kube-api-access-qzjfk\") pod \"swift-proxy-6b6dc55d99-xcq8j\" (UID: \"05a9e311-75a5-4732-9103-ba2bc1e708ad\") " pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.170890 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05a9e311-75a5-4732-9103-ba2bc1e708ad-log-httpd\") pod \"swift-proxy-6b6dc55d99-xcq8j\" (UID: \"05a9e311-75a5-4732-9103-ba2bc1e708ad\") " pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.173939 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/05a9e311-75a5-4732-9103-ba2bc1e708ad-log-httpd\") pod \"swift-proxy-6b6dc55d99-xcq8j\" (UID: \"05a9e311-75a5-4732-9103-ba2bc1e708ad\") " pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.177089 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05a9e311-75a5-4732-9103-ba2bc1e708ad-run-httpd\") pod \"swift-proxy-6b6dc55d99-xcq8j\" (UID: \"05a9e311-75a5-4732-9103-ba2bc1e708ad\") " pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.181875 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/05a9e311-75a5-4732-9103-ba2bc1e708ad-etc-swift\") pod \"swift-proxy-6b6dc55d99-xcq8j\" (UID: \"05a9e311-75a5-4732-9103-ba2bc1e708ad\") " pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.196945 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05a9e311-75a5-4732-9103-ba2bc1e708ad-combined-ca-bundle\") pod \"swift-proxy-6b6dc55d99-xcq8j\" (UID: \"05a9e311-75a5-4732-9103-ba2bc1e708ad\") " pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.197006 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05a9e311-75a5-4732-9103-ba2bc1e708ad-internal-tls-certs\") pod \"swift-proxy-6b6dc55d99-xcq8j\" (UID: \"05a9e311-75a5-4732-9103-ba2bc1e708ad\") " pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.197190 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05a9e311-75a5-4732-9103-ba2bc1e708ad-config-data\") pod \"swift-proxy-6b6dc55d99-xcq8j\" 
(UID: \"05a9e311-75a5-4732-9103-ba2bc1e708ad\") " pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.197480 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05a9e311-75a5-4732-9103-ba2bc1e708ad-public-tls-certs\") pod \"swift-proxy-6b6dc55d99-xcq8j\" (UID: \"05a9e311-75a5-4732-9103-ba2bc1e708ad\") " pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.204282 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzjfk\" (UniqueName: \"kubernetes.io/projected/05a9e311-75a5-4732-9103-ba2bc1e708ad-kube-api-access-qzjfk\") pod \"swift-proxy-6b6dc55d99-xcq8j\" (UID: \"05a9e311-75a5-4732-9103-ba2bc1e708ad\") " pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.503508 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.532512 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.532846 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c801ed15-ca44-4901-a402-93224e1e73b6" containerName="ceilometer-central-agent" containerID="cri-o://168f4059f8dd3694c25f3ec08f457f26483bf1259f2a594f4e6582d3d8485740" gracePeriod=30 Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.533696 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c801ed15-ca44-4901-a402-93224e1e73b6" containerName="proxy-httpd" containerID="cri-o://0274c4e9a039f9270587c10ec610672b81396bd264e071d47e27a007e7461028" gracePeriod=30 Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.533764 4796 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c801ed15-ca44-4901-a402-93224e1e73b6" containerName="sg-core" containerID="cri-o://59656fbccd507e6664b6d6f687610fd80a9b1ea776dbb959a36816230209d07f" gracePeriod=30 Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.533806 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c801ed15-ca44-4901-a402-93224e1e73b6" containerName="ceilometer-notification-agent" containerID="cri-o://4074c27cdd1e17737dce570bb0eed54f64058797bde6664d4a3f706ebcff26bf" gracePeriod=30 Nov 25 14:46:31 crc kubenswrapper[4796]: I1125 14:46:31.542142 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.102240 4796 generic.go:334] "Generic (PLEG): container finished" podID="c801ed15-ca44-4901-a402-93224e1e73b6" containerID="0274c4e9a039f9270587c10ec610672b81396bd264e071d47e27a007e7461028" exitCode=0 Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.102551 4796 generic.go:334] "Generic (PLEG): container finished" podID="c801ed15-ca44-4901-a402-93224e1e73b6" containerID="59656fbccd507e6664b6d6f687610fd80a9b1ea776dbb959a36816230209d07f" exitCode=2 Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.102567 4796 generic.go:334] "Generic (PLEG): container finished" podID="c801ed15-ca44-4901-a402-93224e1e73b6" containerID="168f4059f8dd3694c25f3ec08f457f26483bf1259f2a594f4e6582d3d8485740" exitCode=0 Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.102633 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c801ed15-ca44-4901-a402-93224e1e73b6","Type":"ContainerDied","Data":"0274c4e9a039f9270587c10ec610672b81396bd264e071d47e27a007e7461028"} Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.102664 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"c801ed15-ca44-4901-a402-93224e1e73b6","Type":"ContainerDied","Data":"59656fbccd507e6664b6d6f687610fd80a9b1ea776dbb959a36816230209d07f"} Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.102676 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c801ed15-ca44-4901-a402-93224e1e73b6","Type":"ContainerDied","Data":"168f4059f8dd3694c25f3ec08f457f26483bf1259f2a594f4e6582d3d8485740"} Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.107867 4796 generic.go:334] "Generic (PLEG): container finished" podID="ef983792-84b3-4cd2-86be-253f5619b093" containerID="ffcf66d72f507eb2126b8be605617c121e6421f484b7a6753fb46edab71ced5b" exitCode=0 Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.107933 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ef983792-84b3-4cd2-86be-253f5619b093","Type":"ContainerDied","Data":"ffcf66d72f507eb2126b8be605617c121e6421f484b7a6753fb46edab71ced5b"} Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.236486 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6b6dc55d99-xcq8j"] Nov 25 14:46:32 crc kubenswrapper[4796]: W1125 14:46:32.241039 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05a9e311_75a5_4732_9103_ba2bc1e708ad.slice/crio-37581a46e4939fc87b69afdcf471c1864242c72b169baeb15a880a7db1dad577 WatchSource:0}: Error finding container 37581a46e4939fc87b69afdcf471c1864242c72b169baeb15a880a7db1dad577: Status 404 returned error can't find the container with id 37581a46e4939fc87b69afdcf471c1864242c72b169baeb15a880a7db1dad577 Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.316463 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7cd9956864-5xkx5" podUID="23942f6c-a777-4b11-a51d-ccaee1fff6e7" containerName="horizon" probeResult="failure" 
output="Get \"https://10.217.0.144:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.144:8443: connect: connection refused" Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.560841 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.733416 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef983792-84b3-4cd2-86be-253f5619b093-etc-machine-id\") pod \"ef983792-84b3-4cd2-86be-253f5619b093\" (UID: \"ef983792-84b3-4cd2-86be-253f5619b093\") " Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.733790 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef983792-84b3-4cd2-86be-253f5619b093-scripts\") pod \"ef983792-84b3-4cd2-86be-253f5619b093\" (UID: \"ef983792-84b3-4cd2-86be-253f5619b093\") " Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.733807 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef983792-84b3-4cd2-86be-253f5619b093-combined-ca-bundle\") pod \"ef983792-84b3-4cd2-86be-253f5619b093\" (UID: \"ef983792-84b3-4cd2-86be-253f5619b093\") " Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.733868 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpmpj\" (UniqueName: \"kubernetes.io/projected/ef983792-84b3-4cd2-86be-253f5619b093-kube-api-access-hpmpj\") pod \"ef983792-84b3-4cd2-86be-253f5619b093\" (UID: \"ef983792-84b3-4cd2-86be-253f5619b093\") " Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.733894 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/ef983792-84b3-4cd2-86be-253f5619b093-config-data-custom\") pod \"ef983792-84b3-4cd2-86be-253f5619b093\" (UID: \"ef983792-84b3-4cd2-86be-253f5619b093\") " Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.733922 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef983792-84b3-4cd2-86be-253f5619b093-config-data\") pod \"ef983792-84b3-4cd2-86be-253f5619b093\" (UID: \"ef983792-84b3-4cd2-86be-253f5619b093\") " Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.734180 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef983792-84b3-4cd2-86be-253f5619b093-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ef983792-84b3-4cd2-86be-253f5619b093" (UID: "ef983792-84b3-4cd2-86be-253f5619b093"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.734287 4796 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef983792-84b3-4cd2-86be-253f5619b093-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.746197 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef983792-84b3-4cd2-86be-253f5619b093-scripts" (OuterVolumeSpecName: "scripts") pod "ef983792-84b3-4cd2-86be-253f5619b093" (UID: "ef983792-84b3-4cd2-86be-253f5619b093"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.746325 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef983792-84b3-4cd2-86be-253f5619b093-kube-api-access-hpmpj" (OuterVolumeSpecName: "kube-api-access-hpmpj") pod "ef983792-84b3-4cd2-86be-253f5619b093" (UID: "ef983792-84b3-4cd2-86be-253f5619b093"). InnerVolumeSpecName "kube-api-access-hpmpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.748861 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef983792-84b3-4cd2-86be-253f5619b093-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ef983792-84b3-4cd2-86be-253f5619b093" (UID: "ef983792-84b3-4cd2-86be-253f5619b093"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.808198 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef983792-84b3-4cd2-86be-253f5619b093-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef983792-84b3-4cd2-86be-253f5619b093" (UID: "ef983792-84b3-4cd2-86be-253f5619b093"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.835072 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef983792-84b3-4cd2-86be-253f5619b093-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.835336 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef983792-84b3-4cd2-86be-253f5619b093-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.835397 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpmpj\" (UniqueName: \"kubernetes.io/projected/ef983792-84b3-4cd2-86be-253f5619b093-kube-api-access-hpmpj\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.835453 4796 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef983792-84b3-4cd2-86be-253f5619b093-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.904776 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef983792-84b3-4cd2-86be-253f5619b093-config-data" (OuterVolumeSpecName: "config-data") pod "ef983792-84b3-4cd2-86be-253f5619b093" (UID: "ef983792-84b3-4cd2-86be-253f5619b093"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:32 crc kubenswrapper[4796]: I1125 14:46:32.937772 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef983792-84b3-4cd2-86be-253f5619b093-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.020976 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-l2nn8"] Nov 25 14:46:33 crc kubenswrapper[4796]: E1125 14:46:33.021362 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef983792-84b3-4cd2-86be-253f5619b093" containerName="cinder-scheduler" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.021383 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef983792-84b3-4cd2-86be-253f5619b093" containerName="cinder-scheduler" Nov 25 14:46:33 crc kubenswrapper[4796]: E1125 14:46:33.021422 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef983792-84b3-4cd2-86be-253f5619b093" containerName="probe" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.021430 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef983792-84b3-4cd2-86be-253f5619b093" containerName="probe" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.021666 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef983792-84b3-4cd2-86be-253f5619b093" containerName="cinder-scheduler" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.021693 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef983792-84b3-4cd2-86be-253f5619b093" containerName="probe" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.022302 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-l2nn8" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.026372 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-l2nn8"] Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.062971 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t7zk\" (UniqueName: \"kubernetes.io/projected/391eabca-f0e8-49c8-b98b-c495a39f9d46-kube-api-access-5t7zk\") pod \"nova-api-db-create-l2nn8\" (UID: \"391eabca-f0e8-49c8-b98b-c495a39f9d46\") " pod="openstack/nova-api-db-create-l2nn8" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.063102 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/391eabca-f0e8-49c8-b98b-c495a39f9d46-operator-scripts\") pod \"nova-api-db-create-l2nn8\" (UID: \"391eabca-f0e8-49c8-b98b-c495a39f9d46\") " pod="openstack/nova-api-db-create-l2nn8" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.126717 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-f0b0-account-create-h8v4z"] Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.128029 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-f0b0-account-create-h8v4z" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.130476 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.133809 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ef983792-84b3-4cd2-86be-253f5619b093","Type":"ContainerDied","Data":"bb49e3936e1bd49db2d4ae6e4f60297d7efe1abb916c7a14697474f70bf7a1a4"} Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.133854 4796 scope.go:117] "RemoveContainer" containerID="cfb87b0e7fd7ad2ac439c643785c7d7eb6689e5670b4bfc44f9e2c30de306418" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.133958 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.138964 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f0b0-account-create-h8v4z"] Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.164321 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/391eabca-f0e8-49c8-b98b-c495a39f9d46-operator-scripts\") pod \"nova-api-db-create-l2nn8\" (UID: \"391eabca-f0e8-49c8-b98b-c495a39f9d46\") " pod="openstack/nova-api-db-create-l2nn8" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.164423 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t7zk\" (UniqueName: \"kubernetes.io/projected/391eabca-f0e8-49c8-b98b-c495a39f9d46-kube-api-access-5t7zk\") pod \"nova-api-db-create-l2nn8\" (UID: \"391eabca-f0e8-49c8-b98b-c495a39f9d46\") " pod="openstack/nova-api-db-create-l2nn8" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.165418 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/391eabca-f0e8-49c8-b98b-c495a39f9d46-operator-scripts\") pod \"nova-api-db-create-l2nn8\" (UID: \"391eabca-f0e8-49c8-b98b-c495a39f9d46\") " pod="openstack/nova-api-db-create-l2nn8" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.178376 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b6dc55d99-xcq8j" event={"ID":"05a9e311-75a5-4732-9103-ba2bc1e708ad","Type":"ContainerStarted","Data":"40baf2c2ef16b2312076c1f8000acf7eb573c08c81be94d3d82a86fa77eacc87"} Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.178424 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b6dc55d99-xcq8j" event={"ID":"05a9e311-75a5-4732-9103-ba2bc1e708ad","Type":"ContainerStarted","Data":"b0c49a2776f5b82b4c264ca5a00cd460dd9db7094a8ae525ad76f1967d3503a0"} Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.178438 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b6dc55d99-xcq8j" event={"ID":"05a9e311-75a5-4732-9103-ba2bc1e708ad","Type":"ContainerStarted","Data":"37581a46e4939fc87b69afdcf471c1864242c72b169baeb15a880a7db1dad577"} Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.178837 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.178970 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.212231 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t7zk\" (UniqueName: \"kubernetes.io/projected/391eabca-f0e8-49c8-b98b-c495a39f9d46-kube-api-access-5t7zk\") pod \"nova-api-db-create-l2nn8\" (UID: \"391eabca-f0e8-49c8-b98b-c495a39f9d46\") " pod="openstack/nova-api-db-create-l2nn8" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.229383 4796 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.232822 4796 scope.go:117] "RemoveContainer" containerID="ffcf66d72f507eb2126b8be605617c121e6421f484b7a6753fb46edab71ced5b" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.241683 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.267162 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzfhd\" (UniqueName: \"kubernetes.io/projected/c147d18a-4e11-41c0-87fc-628ab428482b-kube-api-access-hzfhd\") pod \"nova-api-f0b0-account-create-h8v4z\" (UID: \"c147d18a-4e11-41c0-87fc-628ab428482b\") " pod="openstack/nova-api-f0b0-account-create-h8v4z" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.267266 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c147d18a-4e11-41c0-87fc-628ab428482b-operator-scripts\") pod \"nova-api-f0b0-account-create-h8v4z\" (UID: \"c147d18a-4e11-41c0-87fc-628ab428482b\") " pod="openstack/nova-api-f0b0-account-create-h8v4z" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.271457 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.274237 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.282708 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.306529 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-mx7pg"] Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.307955 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-mx7pg" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.333286 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.340593 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6b6dc55d99-xcq8j" podStartSLOduration=3.340552185 podStartE2EDuration="3.340552185s" podCreationTimestamp="2025-11-25 14:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:46:33.234409156 +0000 UTC m=+1321.577518580" watchObservedRunningTime="2025-11-25 14:46:33.340552185 +0000 UTC m=+1321.683661609" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.362141 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-mx7pg"] Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.369132 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzfhd\" (UniqueName: \"kubernetes.io/projected/c147d18a-4e11-41c0-87fc-628ab428482b-kube-api-access-hzfhd\") pod \"nova-api-f0b0-account-create-h8v4z\" (UID: \"c147d18a-4e11-41c0-87fc-628ab428482b\") " pod="openstack/nova-api-f0b0-account-create-h8v4z" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.369209 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ac2f3b3-e1cc-4536-b6b3-eacb46b887db-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7ac2f3b3-e1cc-4536-b6b3-eacb46b887db\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.369302 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c147d18a-4e11-41c0-87fc-628ab428482b-operator-scripts\") pod \"nova-api-f0b0-account-create-h8v4z\" (UID: \"c147d18a-4e11-41c0-87fc-628ab428482b\") " pod="openstack/nova-api-f0b0-account-create-h8v4z" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.369327 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7ac2f3b3-e1cc-4536-b6b3-eacb46b887db-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7ac2f3b3-e1cc-4536-b6b3-eacb46b887db\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.369350 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ac2f3b3-e1cc-4536-b6b3-eacb46b887db-config-data\") pod \"cinder-scheduler-0\" (UID: \"7ac2f3b3-e1cc-4536-b6b3-eacb46b887db\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.369457 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgm2c\" (UniqueName: \"kubernetes.io/projected/7ac2f3b3-e1cc-4536-b6b3-eacb46b887db-kube-api-access-mgm2c\") pod \"cinder-scheduler-0\" (UID: \"7ac2f3b3-e1cc-4536-b6b3-eacb46b887db\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.369490 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac2f3b3-e1cc-4536-b6b3-eacb46b887db-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7ac2f3b3-e1cc-4536-b6b3-eacb46b887db\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.369513 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ac2f3b3-e1cc-4536-b6b3-eacb46b887db-scripts\") pod \"cinder-scheduler-0\" (UID: \"7ac2f3b3-e1cc-4536-b6b3-eacb46b887db\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.371714 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c147d18a-4e11-41c0-87fc-628ab428482b-operator-scripts\") pod \"nova-api-f0b0-account-create-h8v4z\" (UID: \"c147d18a-4e11-41c0-87fc-628ab428482b\") " pod="openstack/nova-api-f0b0-account-create-h8v4z" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.381078 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-l2nn8" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.395599 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzfhd\" (UniqueName: \"kubernetes.io/projected/c147d18a-4e11-41c0-87fc-628ab428482b-kube-api-access-hzfhd\") pod \"nova-api-f0b0-account-create-h8v4z\" (UID: \"c147d18a-4e11-41c0-87fc-628ab428482b\") " pod="openstack/nova-api-f0b0-account-create-h8v4z" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.442643 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-47c0-account-create-92x8f"] Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.443716 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-47c0-account-create-92x8f" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.450875 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.470759 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7ac2f3b3-e1cc-4536-b6b3-eacb46b887db-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7ac2f3b3-e1cc-4536-b6b3-eacb46b887db\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.470796 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ac2f3b3-e1cc-4536-b6b3-eacb46b887db-config-data\") pod \"cinder-scheduler-0\" (UID: \"7ac2f3b3-e1cc-4536-b6b3-eacb46b887db\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.470867 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d784bc2-02d6-423f-8286-74eae70a6986-operator-scripts\") pod \"nova-cell0-db-create-mx7pg\" (UID: \"7d784bc2-02d6-423f-8286-74eae70a6986\") " pod="openstack/nova-cell0-db-create-mx7pg" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.470891 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7ac2f3b3-e1cc-4536-b6b3-eacb46b887db-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7ac2f3b3-e1cc-4536-b6b3-eacb46b887db\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.470906 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgm2c\" (UniqueName: 
\"kubernetes.io/projected/7ac2f3b3-e1cc-4536-b6b3-eacb46b887db-kube-api-access-mgm2c\") pod \"cinder-scheduler-0\" (UID: \"7ac2f3b3-e1cc-4536-b6b3-eacb46b887db\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.471011 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac2f3b3-e1cc-4536-b6b3-eacb46b887db-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7ac2f3b3-e1cc-4536-b6b3-eacb46b887db\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.471067 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ac2f3b3-e1cc-4536-b6b3-eacb46b887db-scripts\") pod \"cinder-scheduler-0\" (UID: \"7ac2f3b3-e1cc-4536-b6b3-eacb46b887db\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.471235 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8gc2\" (UniqueName: \"kubernetes.io/projected/7d784bc2-02d6-423f-8286-74eae70a6986-kube-api-access-p8gc2\") pod \"nova-cell0-db-create-mx7pg\" (UID: \"7d784bc2-02d6-423f-8286-74eae70a6986\") " pod="openstack/nova-cell0-db-create-mx7pg" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.471386 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ac2f3b3-e1cc-4536-b6b3-eacb46b887db-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7ac2f3b3-e1cc-4536-b6b3-eacb46b887db\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.475233 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-f0b0-account-create-h8v4z" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.475390 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ac2f3b3-e1cc-4536-b6b3-eacb46b887db-config-data\") pod \"cinder-scheduler-0\" (UID: \"7ac2f3b3-e1cc-4536-b6b3-eacb46b887db\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.475690 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-4v5qd"] Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.476538 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ac2f3b3-e1cc-4536-b6b3-eacb46b887db-scripts\") pod \"cinder-scheduler-0\" (UID: \"7ac2f3b3-e1cc-4536-b6b3-eacb46b887db\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.477835 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ac2f3b3-e1cc-4536-b6b3-eacb46b887db-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7ac2f3b3-e1cc-4536-b6b3-eacb46b887db\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.482471 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-4v5qd" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.483908 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac2f3b3-e1cc-4536-b6b3-eacb46b887db-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7ac2f3b3-e1cc-4536-b6b3-eacb46b887db\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.484956 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-47c0-account-create-92x8f"] Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.487440 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgm2c\" (UniqueName: \"kubernetes.io/projected/7ac2f3b3-e1cc-4536-b6b3-eacb46b887db-kube-api-access-mgm2c\") pod \"cinder-scheduler-0\" (UID: \"7ac2f3b3-e1cc-4536-b6b3-eacb46b887db\") " pod="openstack/cinder-scheduler-0" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.501097 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-4v5qd"] Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.551434 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-476a-account-create-kw4p2"] Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.552476 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-476a-account-create-kw4p2" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.558756 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-476a-account-create-kw4p2"] Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.559009 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.572642 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d44da81f-aeed-45e1-b7bc-3f4608f077f9-operator-scripts\") pod \"nova-cell0-47c0-account-create-92x8f\" (UID: \"d44da81f-aeed-45e1-b7bc-3f4608f077f9\") " pod="openstack/nova-cell0-47c0-account-create-92x8f" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.572672 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldtvn\" (UniqueName: \"kubernetes.io/projected/df314672-6ee5-4768-b4e4-34df7f3abfd1-kube-api-access-ldtvn\") pod \"nova-cell1-db-create-4v5qd\" (UID: \"df314672-6ee5-4768-b4e4-34df7f3abfd1\") " pod="openstack/nova-cell1-db-create-4v5qd" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.572730 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d784bc2-02d6-423f-8286-74eae70a6986-operator-scripts\") pod \"nova-cell0-db-create-mx7pg\" (UID: \"7d784bc2-02d6-423f-8286-74eae70a6986\") " pod="openstack/nova-cell0-db-create-mx7pg" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.572803 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8gc2\" (UniqueName: \"kubernetes.io/projected/7d784bc2-02d6-423f-8286-74eae70a6986-kube-api-access-p8gc2\") pod \"nova-cell0-db-create-mx7pg\" (UID: 
\"7d784bc2-02d6-423f-8286-74eae70a6986\") " pod="openstack/nova-cell0-db-create-mx7pg" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.572840 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsgfb\" (UniqueName: \"kubernetes.io/projected/d44da81f-aeed-45e1-b7bc-3f4608f077f9-kube-api-access-nsgfb\") pod \"nova-cell0-47c0-account-create-92x8f\" (UID: \"d44da81f-aeed-45e1-b7bc-3f4608f077f9\") " pod="openstack/nova-cell0-47c0-account-create-92x8f" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.572860 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df314672-6ee5-4768-b4e4-34df7f3abfd1-operator-scripts\") pod \"nova-cell1-db-create-4v5qd\" (UID: \"df314672-6ee5-4768-b4e4-34df7f3abfd1\") " pod="openstack/nova-cell1-db-create-4v5qd" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.573808 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d784bc2-02d6-423f-8286-74eae70a6986-operator-scripts\") pod \"nova-cell0-db-create-mx7pg\" (UID: \"7d784bc2-02d6-423f-8286-74eae70a6986\") " pod="openstack/nova-cell0-db-create-mx7pg" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.589423 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8gc2\" (UniqueName: \"kubernetes.io/projected/7d784bc2-02d6-423f-8286-74eae70a6986-kube-api-access-p8gc2\") pod \"nova-cell0-db-create-mx7pg\" (UID: \"7d784bc2-02d6-423f-8286-74eae70a6986\") " pod="openstack/nova-cell0-db-create-mx7pg" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.642409 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.653903 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-mx7pg" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.674425 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d44da81f-aeed-45e1-b7bc-3f4608f077f9-operator-scripts\") pod \"nova-cell0-47c0-account-create-92x8f\" (UID: \"d44da81f-aeed-45e1-b7bc-3f4608f077f9\") " pod="openstack/nova-cell0-47c0-account-create-92x8f" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.674740 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldtvn\" (UniqueName: \"kubernetes.io/projected/df314672-6ee5-4768-b4e4-34df7f3abfd1-kube-api-access-ldtvn\") pod \"nova-cell1-db-create-4v5qd\" (UID: \"df314672-6ee5-4768-b4e4-34df7f3abfd1\") " pod="openstack/nova-cell1-db-create-4v5qd" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.674801 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfmjz\" (UniqueName: \"kubernetes.io/projected/d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9-kube-api-access-rfmjz\") pod \"nova-cell1-476a-account-create-kw4p2\" (UID: \"d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9\") " pod="openstack/nova-cell1-476a-account-create-kw4p2" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.674946 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9-operator-scripts\") pod \"nova-cell1-476a-account-create-kw4p2\" (UID: \"d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9\") " pod="openstack/nova-cell1-476a-account-create-kw4p2" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.675031 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsgfb\" (UniqueName: 
\"kubernetes.io/projected/d44da81f-aeed-45e1-b7bc-3f4608f077f9-kube-api-access-nsgfb\") pod \"nova-cell0-47c0-account-create-92x8f\" (UID: \"d44da81f-aeed-45e1-b7bc-3f4608f077f9\") " pod="openstack/nova-cell0-47c0-account-create-92x8f" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.675078 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df314672-6ee5-4768-b4e4-34df7f3abfd1-operator-scripts\") pod \"nova-cell1-db-create-4v5qd\" (UID: \"df314672-6ee5-4768-b4e4-34df7f3abfd1\") " pod="openstack/nova-cell1-db-create-4v5qd" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.676346 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df314672-6ee5-4768-b4e4-34df7f3abfd1-operator-scripts\") pod \"nova-cell1-db-create-4v5qd\" (UID: \"df314672-6ee5-4768-b4e4-34df7f3abfd1\") " pod="openstack/nova-cell1-db-create-4v5qd" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.677395 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d44da81f-aeed-45e1-b7bc-3f4608f077f9-operator-scripts\") pod \"nova-cell0-47c0-account-create-92x8f\" (UID: \"d44da81f-aeed-45e1-b7bc-3f4608f077f9\") " pod="openstack/nova-cell0-47c0-account-create-92x8f" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.697041 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsgfb\" (UniqueName: \"kubernetes.io/projected/d44da81f-aeed-45e1-b7bc-3f4608f077f9-kube-api-access-nsgfb\") pod \"nova-cell0-47c0-account-create-92x8f\" (UID: \"d44da81f-aeed-45e1-b7bc-3f4608f077f9\") " pod="openstack/nova-cell0-47c0-account-create-92x8f" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.763016 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldtvn\" (UniqueName: 
\"kubernetes.io/projected/df314672-6ee5-4768-b4e4-34df7f3abfd1-kube-api-access-ldtvn\") pod \"nova-cell1-db-create-4v5qd\" (UID: \"df314672-6ee5-4768-b4e4-34df7f3abfd1\") " pod="openstack/nova-cell1-db-create-4v5qd" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.779888 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9-operator-scripts\") pod \"nova-cell1-476a-account-create-kw4p2\" (UID: \"d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9\") " pod="openstack/nova-cell1-476a-account-create-kw4p2" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.780068 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfmjz\" (UniqueName: \"kubernetes.io/projected/d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9-kube-api-access-rfmjz\") pod \"nova-cell1-476a-account-create-kw4p2\" (UID: \"d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9\") " pod="openstack/nova-cell1-476a-account-create-kw4p2" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.790825 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9-operator-scripts\") pod \"nova-cell1-476a-account-create-kw4p2\" (UID: \"d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9\") " pod="openstack/nova-cell1-476a-account-create-kw4p2" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.810863 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfmjz\" (UniqueName: \"kubernetes.io/projected/d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9-kube-api-access-rfmjz\") pod \"nova-cell1-476a-account-create-kw4p2\" (UID: \"d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9\") " pod="openstack/nova-cell1-476a-account-create-kw4p2" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.870695 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-47c0-account-create-92x8f" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.893949 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4v5qd" Nov 25 14:46:33 crc kubenswrapper[4796]: I1125 14:46:33.896001 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-476a-account-create-kw4p2" Nov 25 14:46:34 crc kubenswrapper[4796]: I1125 14:46:34.043862 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-l2nn8"] Nov 25 14:46:34 crc kubenswrapper[4796]: I1125 14:46:34.231629 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-l2nn8" event={"ID":"391eabca-f0e8-49c8-b98b-c495a39f9d46","Type":"ContainerStarted","Data":"e439923ca8cb44ac62375330ad9db50308ae7319796f1b7a2419705a9c5ee4e0"} Nov 25 14:46:34 crc kubenswrapper[4796]: I1125 14:46:34.235489 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 14:46:34 crc kubenswrapper[4796]: I1125 14:46:34.328209 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f0b0-account-create-h8v4z"] Nov 25 14:46:34 crc kubenswrapper[4796]: I1125 14:46:34.461298 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef983792-84b3-4cd2-86be-253f5619b093" path="/var/lib/kubelet/pods/ef983792-84b3-4cd2-86be-253f5619b093/volumes" Nov 25 14:46:34 crc kubenswrapper[4796]: I1125 14:46:34.462916 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-mx7pg"] Nov 25 14:46:34 crc kubenswrapper[4796]: I1125 14:46:34.666493 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-476a-account-create-kw4p2"] Nov 25 14:46:34 crc kubenswrapper[4796]: I1125 14:46:34.686478 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-4v5qd"] Nov 25 
14:46:34 crc kubenswrapper[4796]: I1125 14:46:34.700115 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-47c0-account-create-92x8f"] Nov 25 14:46:35 crc kubenswrapper[4796]: I1125 14:46:35.256725 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-476a-account-create-kw4p2" event={"ID":"d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9","Type":"ContainerStarted","Data":"2c820b08564400d58237462f8ede8e7588c97b0ce47919f4edecbae522bcad32"} Nov 25 14:46:35 crc kubenswrapper[4796]: I1125 14:46:35.256973 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-476a-account-create-kw4p2" event={"ID":"d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9","Type":"ContainerStarted","Data":"dd675479cd13bda8968bfee5c65e93fb6b22a0f5cc8521a8c09fe2f60feeedb4"} Nov 25 14:46:35 crc kubenswrapper[4796]: I1125 14:46:35.258849 4796 generic.go:334] "Generic (PLEG): container finished" podID="7d784bc2-02d6-423f-8286-74eae70a6986" containerID="b27d4c66e3d062f16abbcad6819f80948f578648bb56e4390908ae9603213037" exitCode=0 Nov 25 14:46:35 crc kubenswrapper[4796]: I1125 14:46:35.258899 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mx7pg" event={"ID":"7d784bc2-02d6-423f-8286-74eae70a6986","Type":"ContainerDied","Data":"b27d4c66e3d062f16abbcad6819f80948f578648bb56e4390908ae9603213037"} Nov 25 14:46:35 crc kubenswrapper[4796]: I1125 14:46:35.258916 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mx7pg" event={"ID":"7d784bc2-02d6-423f-8286-74eae70a6986","Type":"ContainerStarted","Data":"842a6e281184eb33e25ad8f2d173cc43b30557a994e2c5a81f5ca5637020cf52"} Nov 25 14:46:35 crc kubenswrapper[4796]: I1125 14:46:35.263700 4796 generic.go:334] "Generic (PLEG): container finished" podID="c147d18a-4e11-41c0-87fc-628ab428482b" containerID="fa26e6e7264c7408342069a542d6d9878eeb2d814cd5084bb0807037339563cd" exitCode=0 Nov 25 14:46:35 crc kubenswrapper[4796]: I1125 
14:46:35.263828 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f0b0-account-create-h8v4z" event={"ID":"c147d18a-4e11-41c0-87fc-628ab428482b","Type":"ContainerDied","Data":"fa26e6e7264c7408342069a542d6d9878eeb2d814cd5084bb0807037339563cd"} Nov 25 14:46:35 crc kubenswrapper[4796]: I1125 14:46:35.263855 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f0b0-account-create-h8v4z" event={"ID":"c147d18a-4e11-41c0-87fc-628ab428482b","Type":"ContainerStarted","Data":"fb59ba2aa645889797025af8878a402e9e060718ac4fd1673624c855f8049905"} Nov 25 14:46:35 crc kubenswrapper[4796]: I1125 14:46:35.270215 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7ac2f3b3-e1cc-4536-b6b3-eacb46b887db","Type":"ContainerStarted","Data":"2635dd738d8e371d379e8667c97518cdbcf337930b677f977e5b2abdc335b849"} Nov 25 14:46:35 crc kubenswrapper[4796]: I1125 14:46:35.273672 4796 generic.go:334] "Generic (PLEG): container finished" podID="391eabca-f0e8-49c8-b98b-c495a39f9d46" containerID="3c40d926051c3e978a39f64ee4616f5d1e54cf18a14935a1aad94403b40f713f" exitCode=0 Nov 25 14:46:35 crc kubenswrapper[4796]: I1125 14:46:35.273751 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-l2nn8" event={"ID":"391eabca-f0e8-49c8-b98b-c495a39f9d46","Type":"ContainerDied","Data":"3c40d926051c3e978a39f64ee4616f5d1e54cf18a14935a1aad94403b40f713f"} Nov 25 14:46:35 crc kubenswrapper[4796]: I1125 14:46:35.283295 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4v5qd" event={"ID":"df314672-6ee5-4768-b4e4-34df7f3abfd1","Type":"ContainerStarted","Data":"5bb708f073b921fc079417bc4beb5addd37b59b0c39606708ed53c498561c69f"} Nov 25 14:46:35 crc kubenswrapper[4796]: I1125 14:46:35.283338 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4v5qd" 
event={"ID":"df314672-6ee5-4768-b4e4-34df7f3abfd1","Type":"ContainerStarted","Data":"4e72e9c5846db2e0f1b793e0db1d63b9e0fc5bf515fc4a7e50f65df696a9d461"} Nov 25 14:46:35 crc kubenswrapper[4796]: I1125 14:46:35.290995 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-47c0-account-create-92x8f" event={"ID":"d44da81f-aeed-45e1-b7bc-3f4608f077f9","Type":"ContainerStarted","Data":"791f3fd2b3fbfada116efc5867cd0e47785f4ce89f4ee623c27749f69c6fe7e0"} Nov 25 14:46:35 crc kubenswrapper[4796]: I1125 14:46:35.291268 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-47c0-account-create-92x8f" event={"ID":"d44da81f-aeed-45e1-b7bc-3f4608f077f9","Type":"ContainerStarted","Data":"237054939cb2223256e766f6e21785dc17b475c5cbdc95cc096c9bc86314199c"} Nov 25 14:46:35 crc kubenswrapper[4796]: I1125 14:46:35.308362 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-476a-account-create-kw4p2" podStartSLOduration=2.308346271 podStartE2EDuration="2.308346271s" podCreationTimestamp="2025-11-25 14:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:46:35.272504694 +0000 UTC m=+1323.615614118" watchObservedRunningTime="2025-11-25 14:46:35.308346271 +0000 UTC m=+1323.651455695" Nov 25 14:46:35 crc kubenswrapper[4796]: I1125 14:46:35.381472 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-4v5qd" podStartSLOduration=2.381455 podStartE2EDuration="2.381455s" podCreationTimestamp="2025-11-25 14:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:46:35.336009533 +0000 UTC m=+1323.679118957" watchObservedRunningTime="2025-11-25 14:46:35.381455 +0000 UTC m=+1323.724564424" Nov 25 14:46:35 crc kubenswrapper[4796]: I1125 14:46:35.396514 
4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-47c0-account-create-92x8f" podStartSLOduration=2.396494789 podStartE2EDuration="2.396494789s" podCreationTimestamp="2025-11-25 14:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:46:35.378225359 +0000 UTC m=+1323.721334783" watchObservedRunningTime="2025-11-25 14:46:35.396494789 +0000 UTC m=+1323.739604213" Nov 25 14:46:35 crc kubenswrapper[4796]: W1125 14:46:35.662650 4796 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d784bc2_02d6_423f_8286_74eae70a6986.slice/crio-conmon-b27d4c66e3d062f16abbcad6819f80948f578648bb56e4390908ae9603213037.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d784bc2_02d6_423f_8286_74eae70a6986.slice/crio-conmon-b27d4c66e3d062f16abbcad6819f80948f578648bb56e4390908ae9603213037.scope: no such file or directory Nov 25 14:46:35 crc kubenswrapper[4796]: W1125 14:46:35.662710 4796 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d784bc2_02d6_423f_8286_74eae70a6986.slice/crio-b27d4c66e3d062f16abbcad6819f80948f578648bb56e4390908ae9603213037.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d784bc2_02d6_423f_8286_74eae70a6986.slice/crio-b27d4c66e3d062f16abbcad6819f80948f578648bb56e4390908ae9603213037.scope: no such file or directory Nov 25 14:46:35 crc kubenswrapper[4796]: W1125 14:46:35.662903 4796 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd203a2c3_7b83_4c3e_b151_d6fc27b0f4e9.slice/crio-conmon-2c820b08564400d58237462f8ede8e7588c97b0ce47919f4edecbae522bcad32.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd203a2c3_7b83_4c3e_b151_d6fc27b0f4e9.slice/crio-conmon-2c820b08564400d58237462f8ede8e7588c97b0ce47919f4edecbae522bcad32.scope: no such file or directory Nov 25 14:46:35 crc kubenswrapper[4796]: W1125 14:46:35.662923 4796 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd203a2c3_7b83_4c3e_b151_d6fc27b0f4e9.slice/crio-2c820b08564400d58237462f8ede8e7588c97b0ce47919f4edecbae522bcad32.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd203a2c3_7b83_4c3e_b151_d6fc27b0f4e9.slice/crio-2c820b08564400d58237462f8ede8e7588c97b0ce47919f4edecbae522bcad32.scope: no such file or directory Nov 25 14:46:35 crc kubenswrapper[4796]: W1125 14:46:35.662940 4796 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf314672_6ee5_4768_b4e4_34df7f3abfd1.slice/crio-conmon-5bb708f073b921fc079417bc4beb5addd37b59b0c39606708ed53c498561c69f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf314672_6ee5_4768_b4e4_34df7f3abfd1.slice/crio-conmon-5bb708f073b921fc079417bc4beb5addd37b59b0c39606708ed53c498561c69f.scope: no such file or directory Nov 25 14:46:35 crc kubenswrapper[4796]: W1125 14:46:35.662958 4796 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf314672_6ee5_4768_b4e4_34df7f3abfd1.slice/crio-5bb708f073b921fc079417bc4beb5addd37b59b0c39606708ed53c498561c69f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf314672_6ee5_4768_b4e4_34df7f3abfd1.slice/crio-5bb708f073b921fc079417bc4beb5addd37b59b0c39606708ed53c498561c69f.scope: no such file or directory Nov 25 14:46:35 crc kubenswrapper[4796]: W1125 14:46:35.663001 4796 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd44da81f_aeed_45e1_b7bc_3f4608f077f9.slice/crio-conmon-791f3fd2b3fbfada116efc5867cd0e47785f4ce89f4ee623c27749f69c6fe7e0.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd44da81f_aeed_45e1_b7bc_3f4608f077f9.slice/crio-conmon-791f3fd2b3fbfada116efc5867cd0e47785f4ce89f4ee623c27749f69c6fe7e0.scope: no such file or directory Nov 25 14:46:35 crc kubenswrapper[4796]: W1125 14:46:35.663016 4796 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd44da81f_aeed_45e1_b7bc_3f4608f077f9.slice/crio-791f3fd2b3fbfada116efc5867cd0e47785f4ce89f4ee623c27749f69c6fe7e0.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd44da81f_aeed_45e1_b7bc_3f4608f077f9.slice/crio-791f3fd2b3fbfada116efc5867cd0e47785f4ce89f4ee623c27749f69c6fe7e0.scope: no such file or directory Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.242393 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.311789 4796 generic.go:334] "Generic (PLEG): container finished" podID="d44da81f-aeed-45e1-b7bc-3f4608f077f9" containerID="791f3fd2b3fbfada116efc5867cd0e47785f4ce89f4ee623c27749f69c6fe7e0" exitCode=0 Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.311872 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-47c0-account-create-92x8f" event={"ID":"d44da81f-aeed-45e1-b7bc-3f4608f077f9","Type":"ContainerDied","Data":"791f3fd2b3fbfada116efc5867cd0e47785f4ce89f4ee623c27749f69c6fe7e0"} Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.317090 4796 generic.go:334] "Generic (PLEG): container finished" podID="d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9" containerID="2c820b08564400d58237462f8ede8e7588c97b0ce47919f4edecbae522bcad32" exitCode=0 Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.317151 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-476a-account-create-kw4p2" event={"ID":"d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9","Type":"ContainerDied","Data":"2c820b08564400d58237462f8ede8e7588c97b0ce47919f4edecbae522bcad32"} Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.325343 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7ac2f3b3-e1cc-4536-b6b3-eacb46b887db","Type":"ContainerStarted","Data":"c82deb1d070bff94f2898c6da48d6d003ba41712ae5e171063c59e44b5dc126c"} Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.325410 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7ac2f3b3-e1cc-4536-b6b3-eacb46b887db","Type":"ContainerStarted","Data":"2cacea83ecf973f42552139a0133706c829fc0f421bb99fb92c0c605fc2733cb"} Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.339235 4796 generic.go:334] "Generic (PLEG): container finished" podID="c801ed15-ca44-4901-a402-93224e1e73b6" 
containerID="4074c27cdd1e17737dce570bb0eed54f64058797bde6664d4a3f706ebcff26bf" exitCode=0 Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.339483 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c801ed15-ca44-4901-a402-93224e1e73b6","Type":"ContainerDied","Data":"4074c27cdd1e17737dce570bb0eed54f64058797bde6664d4a3f706ebcff26bf"} Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.339617 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c801ed15-ca44-4901-a402-93224e1e73b6","Type":"ContainerDied","Data":"ae7e7720d5b5c010171f4df9e4465d725dd0afbb2bd35159242ae16e79391f6e"} Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.339704 4796 scope.go:117] "RemoveContainer" containerID="0274c4e9a039f9270587c10ec610672b81396bd264e071d47e27a007e7461028" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.339897 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.349591 4796 generic.go:334] "Generic (PLEG): container finished" podID="df314672-6ee5-4768-b4e4-34df7f3abfd1" containerID="5bb708f073b921fc079417bc4beb5addd37b59b0c39606708ed53c498561c69f" exitCode=0 Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.349907 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4v5qd" event={"ID":"df314672-6ee5-4768-b4e4-34df7f3abfd1","Type":"ContainerDied","Data":"5bb708f073b921fc079417bc4beb5addd37b59b0c39606708ed53c498561c69f"} Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.360413 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c801ed15-ca44-4901-a402-93224e1e73b6-run-httpd\") pod \"c801ed15-ca44-4901-a402-93224e1e73b6\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 
14:46:36.360492 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c801ed15-ca44-4901-a402-93224e1e73b6-scripts\") pod \"c801ed15-ca44-4901-a402-93224e1e73b6\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.360559 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c801ed15-ca44-4901-a402-93224e1e73b6-config-data\") pod \"c801ed15-ca44-4901-a402-93224e1e73b6\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.360618 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cppk\" (UniqueName: \"kubernetes.io/projected/c801ed15-ca44-4901-a402-93224e1e73b6-kube-api-access-8cppk\") pod \"c801ed15-ca44-4901-a402-93224e1e73b6\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.360723 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c801ed15-ca44-4901-a402-93224e1e73b6-combined-ca-bundle\") pod \"c801ed15-ca44-4901-a402-93224e1e73b6\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.360794 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c801ed15-ca44-4901-a402-93224e1e73b6-log-httpd\") pod \"c801ed15-ca44-4901-a402-93224e1e73b6\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.360923 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c801ed15-ca44-4901-a402-93224e1e73b6-sg-core-conf-yaml\") pod 
\"c801ed15-ca44-4901-a402-93224e1e73b6\" (UID: \"c801ed15-ca44-4901-a402-93224e1e73b6\") " Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.360913 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c801ed15-ca44-4901-a402-93224e1e73b6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c801ed15-ca44-4901-a402-93224e1e73b6" (UID: "c801ed15-ca44-4901-a402-93224e1e73b6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.366009 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c801ed15-ca44-4901-a402-93224e1e73b6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c801ed15-ca44-4901-a402-93224e1e73b6" (UID: "c801ed15-ca44-4901-a402-93224e1e73b6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.370764 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c801ed15-ca44-4901-a402-93224e1e73b6-scripts" (OuterVolumeSpecName: "scripts") pod "c801ed15-ca44-4901-a402-93224e1e73b6" (UID: "c801ed15-ca44-4901-a402-93224e1e73b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.374752 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c801ed15-ca44-4901-a402-93224e1e73b6-kube-api-access-8cppk" (OuterVolumeSpecName: "kube-api-access-8cppk") pod "c801ed15-ca44-4901-a402-93224e1e73b6" (UID: "c801ed15-ca44-4901-a402-93224e1e73b6"). InnerVolumeSpecName "kube-api-access-8cppk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.387133 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.387111931 podStartE2EDuration="3.387111931s" podCreationTimestamp="2025-11-25 14:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:46:36.365988942 +0000 UTC m=+1324.709098376" watchObservedRunningTime="2025-11-25 14:46:36.387111931 +0000 UTC m=+1324.730221355" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.463389 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cppk\" (UniqueName: \"kubernetes.io/projected/c801ed15-ca44-4901-a402-93224e1e73b6-kube-api-access-8cppk\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.463427 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c801ed15-ca44-4901-a402-93224e1e73b6-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.463439 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c801ed15-ca44-4901-a402-93224e1e73b6-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.463450 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c801ed15-ca44-4901-a402-93224e1e73b6-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.465039 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c801ed15-ca44-4901-a402-93224e1e73b6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c801ed15-ca44-4901-a402-93224e1e73b6" (UID: 
"c801ed15-ca44-4901-a402-93224e1e73b6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.485652 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c801ed15-ca44-4901-a402-93224e1e73b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c801ed15-ca44-4901-a402-93224e1e73b6" (UID: "c801ed15-ca44-4901-a402-93224e1e73b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.545436 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c801ed15-ca44-4901-a402-93224e1e73b6-config-data" (OuterVolumeSpecName: "config-data") pod "c801ed15-ca44-4901-a402-93224e1e73b6" (UID: "c801ed15-ca44-4901-a402-93224e1e73b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.568799 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c801ed15-ca44-4901-a402-93224e1e73b6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.568829 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c801ed15-ca44-4901-a402-93224e1e73b6-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.568838 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c801ed15-ca44-4901-a402-93224e1e73b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.606408 4796 scope.go:117] "RemoveContainer" containerID="59656fbccd507e6664b6d6f687610fd80a9b1ea776dbb959a36816230209d07f" Nov 25 
14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.640255 4796 scope.go:117] "RemoveContainer" containerID="4074c27cdd1e17737dce570bb0eed54f64058797bde6664d4a3f706ebcff26bf" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.678736 4796 scope.go:117] "RemoveContainer" containerID="168f4059f8dd3694c25f3ec08f457f26483bf1259f2a594f4e6582d3d8485740" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.695537 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.715624 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.728922 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:46:36 crc kubenswrapper[4796]: E1125 14:46:36.729332 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c801ed15-ca44-4901-a402-93224e1e73b6" containerName="proxy-httpd" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.729349 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c801ed15-ca44-4901-a402-93224e1e73b6" containerName="proxy-httpd" Nov 25 14:46:36 crc kubenswrapper[4796]: E1125 14:46:36.729364 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c801ed15-ca44-4901-a402-93224e1e73b6" containerName="ceilometer-notification-agent" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.729371 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c801ed15-ca44-4901-a402-93224e1e73b6" containerName="ceilometer-notification-agent" Nov 25 14:46:36 crc kubenswrapper[4796]: E1125 14:46:36.729385 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c801ed15-ca44-4901-a402-93224e1e73b6" containerName="ceilometer-central-agent" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.729391 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c801ed15-ca44-4901-a402-93224e1e73b6" 
containerName="ceilometer-central-agent" Nov 25 14:46:36 crc kubenswrapper[4796]: E1125 14:46:36.729413 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c801ed15-ca44-4901-a402-93224e1e73b6" containerName="sg-core" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.729419 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c801ed15-ca44-4901-a402-93224e1e73b6" containerName="sg-core" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.729644 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c801ed15-ca44-4901-a402-93224e1e73b6" containerName="ceilometer-central-agent" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.729664 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c801ed15-ca44-4901-a402-93224e1e73b6" containerName="ceilometer-notification-agent" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.729676 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c801ed15-ca44-4901-a402-93224e1e73b6" containerName="sg-core" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.729687 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c801ed15-ca44-4901-a402-93224e1e73b6" containerName="proxy-httpd" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.731484 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.735874 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.743944 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.744131 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.793468 4796 scope.go:117] "RemoveContainer" containerID="0274c4e9a039f9270587c10ec610672b81396bd264e071d47e27a007e7461028" Nov 25 14:46:36 crc kubenswrapper[4796]: E1125 14:46:36.794096 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0274c4e9a039f9270587c10ec610672b81396bd264e071d47e27a007e7461028\": container with ID starting with 0274c4e9a039f9270587c10ec610672b81396bd264e071d47e27a007e7461028 not found: ID does not exist" containerID="0274c4e9a039f9270587c10ec610672b81396bd264e071d47e27a007e7461028" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.794126 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0274c4e9a039f9270587c10ec610672b81396bd264e071d47e27a007e7461028"} err="failed to get container status \"0274c4e9a039f9270587c10ec610672b81396bd264e071d47e27a007e7461028\": rpc error: code = NotFound desc = could not find container \"0274c4e9a039f9270587c10ec610672b81396bd264e071d47e27a007e7461028\": container with ID starting with 0274c4e9a039f9270587c10ec610672b81396bd264e071d47e27a007e7461028 not found: ID does not exist" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.794151 4796 scope.go:117] "RemoveContainer" containerID="59656fbccd507e6664b6d6f687610fd80a9b1ea776dbb959a36816230209d07f" Nov 25 14:46:36 crc kubenswrapper[4796]: E1125 
14:46:36.794667 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59656fbccd507e6664b6d6f687610fd80a9b1ea776dbb959a36816230209d07f\": container with ID starting with 59656fbccd507e6664b6d6f687610fd80a9b1ea776dbb959a36816230209d07f not found: ID does not exist" containerID="59656fbccd507e6664b6d6f687610fd80a9b1ea776dbb959a36816230209d07f" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.794685 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59656fbccd507e6664b6d6f687610fd80a9b1ea776dbb959a36816230209d07f"} err="failed to get container status \"59656fbccd507e6664b6d6f687610fd80a9b1ea776dbb959a36816230209d07f\": rpc error: code = NotFound desc = could not find container \"59656fbccd507e6664b6d6f687610fd80a9b1ea776dbb959a36816230209d07f\": container with ID starting with 59656fbccd507e6664b6d6f687610fd80a9b1ea776dbb959a36816230209d07f not found: ID does not exist" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.794697 4796 scope.go:117] "RemoveContainer" containerID="4074c27cdd1e17737dce570bb0eed54f64058797bde6664d4a3f706ebcff26bf" Nov 25 14:46:36 crc kubenswrapper[4796]: E1125 14:46:36.795335 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4074c27cdd1e17737dce570bb0eed54f64058797bde6664d4a3f706ebcff26bf\": container with ID starting with 4074c27cdd1e17737dce570bb0eed54f64058797bde6664d4a3f706ebcff26bf not found: ID does not exist" containerID="4074c27cdd1e17737dce570bb0eed54f64058797bde6664d4a3f706ebcff26bf" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.795454 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4074c27cdd1e17737dce570bb0eed54f64058797bde6664d4a3f706ebcff26bf"} err="failed to get container status \"4074c27cdd1e17737dce570bb0eed54f64058797bde6664d4a3f706ebcff26bf\": rpc 
error: code = NotFound desc = could not find container \"4074c27cdd1e17737dce570bb0eed54f64058797bde6664d4a3f706ebcff26bf\": container with ID starting with 4074c27cdd1e17737dce570bb0eed54f64058797bde6664d4a3f706ebcff26bf not found: ID does not exist" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.795494 4796 scope.go:117] "RemoveContainer" containerID="168f4059f8dd3694c25f3ec08f457f26483bf1259f2a594f4e6582d3d8485740" Nov 25 14:46:36 crc kubenswrapper[4796]: E1125 14:46:36.795858 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"168f4059f8dd3694c25f3ec08f457f26483bf1259f2a594f4e6582d3d8485740\": container with ID starting with 168f4059f8dd3694c25f3ec08f457f26483bf1259f2a594f4e6582d3d8485740 not found: ID does not exist" containerID="168f4059f8dd3694c25f3ec08f457f26483bf1259f2a594f4e6582d3d8485740" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.795895 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"168f4059f8dd3694c25f3ec08f457f26483bf1259f2a594f4e6582d3d8485740"} err="failed to get container status \"168f4059f8dd3694c25f3ec08f457f26483bf1259f2a594f4e6582d3d8485740\": rpc error: code = NotFound desc = could not find container \"168f4059f8dd3694c25f3ec08f457f26483bf1259f2a594f4e6582d3d8485740\": container with ID starting with 168f4059f8dd3694c25f3ec08f457f26483bf1259f2a594f4e6582d3d8485740 not found: ID does not exist" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.840070 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-mx7pg" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.876594 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " pod="openstack/ceilometer-0" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.876664 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-scripts\") pod \"ceilometer-0\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " pod="openstack/ceilometer-0" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.876729 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-config-data\") pod \"ceilometer-0\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " pod="openstack/ceilometer-0" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.876746 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-run-httpd\") pod \"ceilometer-0\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " pod="openstack/ceilometer-0" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.876775 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " pod="openstack/ceilometer-0" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.876808 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99qls\" (UniqueName: \"kubernetes.io/projected/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-kube-api-access-99qls\") pod \"ceilometer-0\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " pod="openstack/ceilometer-0" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.877363 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-log-httpd\") pod \"ceilometer-0\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " pod="openstack/ceilometer-0" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.980284 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8gc2\" (UniqueName: \"kubernetes.io/projected/7d784bc2-02d6-423f-8286-74eae70a6986-kube-api-access-p8gc2\") pod \"7d784bc2-02d6-423f-8286-74eae70a6986\" (UID: \"7d784bc2-02d6-423f-8286-74eae70a6986\") " Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.980351 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d784bc2-02d6-423f-8286-74eae70a6986-operator-scripts\") pod \"7d784bc2-02d6-423f-8286-74eae70a6986\" (UID: \"7d784bc2-02d6-423f-8286-74eae70a6986\") " Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.980673 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " pod="openstack/ceilometer-0" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.980714 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-scripts\") pod \"ceilometer-0\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " pod="openstack/ceilometer-0" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.980761 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-config-data\") pod \"ceilometer-0\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " pod="openstack/ceilometer-0" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.980778 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-run-httpd\") pod \"ceilometer-0\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " pod="openstack/ceilometer-0" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.980802 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " pod="openstack/ceilometer-0" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.980833 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99qls\" (UniqueName: \"kubernetes.io/projected/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-kube-api-access-99qls\") pod \"ceilometer-0\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " pod="openstack/ceilometer-0" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.980891 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-log-httpd\") pod \"ceilometer-0\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " pod="openstack/ceilometer-0" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 
14:46:36.981318 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d784bc2-02d6-423f-8286-74eae70a6986-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7d784bc2-02d6-423f-8286-74eae70a6986" (UID: "7d784bc2-02d6-423f-8286-74eae70a6986"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.982218 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-l2nn8" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.988878 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " pod="openstack/ceilometer-0" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.989274 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " pod="openstack/ceilometer-0" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.989705 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-config-data\") pod \"ceilometer-0\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " pod="openstack/ceilometer-0" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.991859 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-log-httpd\") pod \"ceilometer-0\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " pod="openstack/ceilometer-0" Nov 25 14:46:36 crc 
kubenswrapper[4796]: I1125 14:46:36.993980 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d784bc2-02d6-423f-8286-74eae70a6986-kube-api-access-p8gc2" (OuterVolumeSpecName: "kube-api-access-p8gc2") pod "7d784bc2-02d6-423f-8286-74eae70a6986" (UID: "7d784bc2-02d6-423f-8286-74eae70a6986"). InnerVolumeSpecName "kube-api-access-p8gc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:46:36 crc kubenswrapper[4796]: I1125 14:46:36.999910 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-run-httpd\") pod \"ceilometer-0\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " pod="openstack/ceilometer-0" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.002241 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99qls\" (UniqueName: \"kubernetes.io/projected/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-kube-api-access-99qls\") pod \"ceilometer-0\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " pod="openstack/ceilometer-0" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.002605 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f0b0-account-create-h8v4z" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.002649 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-scripts\") pod \"ceilometer-0\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " pod="openstack/ceilometer-0" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.060514 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f9cd6669d-kmwxf" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.082100 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.083221 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzfhd\" (UniqueName: \"kubernetes.io/projected/c147d18a-4e11-41c0-87fc-628ab428482b-kube-api-access-hzfhd\") pod \"c147d18a-4e11-41c0-87fc-628ab428482b\" (UID: \"c147d18a-4e11-41c0-87fc-628ab428482b\") " Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.083280 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c147d18a-4e11-41c0-87fc-628ab428482b-operator-scripts\") pod \"c147d18a-4e11-41c0-87fc-628ab428482b\" (UID: \"c147d18a-4e11-41c0-87fc-628ab428482b\") " Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.083319 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/391eabca-f0e8-49c8-b98b-c495a39f9d46-operator-scripts\") pod \"391eabca-f0e8-49c8-b98b-c495a39f9d46\" (UID: \"391eabca-f0e8-49c8-b98b-c495a39f9d46\") " Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.083451 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t7zk\" (UniqueName: \"kubernetes.io/projected/391eabca-f0e8-49c8-b98b-c495a39f9d46-kube-api-access-5t7zk\") pod \"391eabca-f0e8-49c8-b98b-c495a39f9d46\" (UID: \"391eabca-f0e8-49c8-b98b-c495a39f9d46\") " Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.084225 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8gc2\" (UniqueName: \"kubernetes.io/projected/7d784bc2-02d6-423f-8286-74eae70a6986-kube-api-access-p8gc2\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.084249 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7d784bc2-02d6-423f-8286-74eae70a6986-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.085132 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c147d18a-4e11-41c0-87fc-628ab428482b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c147d18a-4e11-41c0-87fc-628ab428482b" (UID: "c147d18a-4e11-41c0-87fc-628ab428482b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.085471 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/391eabca-f0e8-49c8-b98b-c495a39f9d46-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "391eabca-f0e8-49c8-b98b-c495a39f9d46" (UID: "391eabca-f0e8-49c8-b98b-c495a39f9d46"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.094777 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c147d18a-4e11-41c0-87fc-628ab428482b-kube-api-access-hzfhd" (OuterVolumeSpecName: "kube-api-access-hzfhd") pod "c147d18a-4e11-41c0-87fc-628ab428482b" (UID: "c147d18a-4e11-41c0-87fc-628ab428482b"). InnerVolumeSpecName "kube-api-access-hzfhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.098922 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/391eabca-f0e8-49c8-b98b-c495a39f9d46-kube-api-access-5t7zk" (OuterVolumeSpecName: "kube-api-access-5t7zk") pod "391eabca-f0e8-49c8-b98b-c495a39f9d46" (UID: "391eabca-f0e8-49c8-b98b-c495a39f9d46"). InnerVolumeSpecName "kube-api-access-5t7zk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.156763 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.188439 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-ovndb-tls-certs\") pod \"d8d5b61c-f184-4963-ba9f-9cf698fd8e60\" (UID: \"d8d5b61c-f184-4963-ba9f-9cf698fd8e60\") " Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.188483 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk92c\" (UniqueName: \"kubernetes.io/projected/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-kube-api-access-hk92c\") pod \"d8d5b61c-f184-4963-ba9f-9cf698fd8e60\" (UID: \"d8d5b61c-f184-4963-ba9f-9cf698fd8e60\") " Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.188523 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-combined-ca-bundle\") pod \"d8d5b61c-f184-4963-ba9f-9cf698fd8e60\" (UID: \"d8d5b61c-f184-4963-ba9f-9cf698fd8e60\") " Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.188607 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-config\") pod \"d8d5b61c-f184-4963-ba9f-9cf698fd8e60\" (UID: \"d8d5b61c-f184-4963-ba9f-9cf698fd8e60\") " Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.188644 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-httpd-config\") pod \"d8d5b61c-f184-4963-ba9f-9cf698fd8e60\" (UID: \"d8d5b61c-f184-4963-ba9f-9cf698fd8e60\") " Nov 25 14:46:37 
crc kubenswrapper[4796]: I1125 14:46:37.189008 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t7zk\" (UniqueName: \"kubernetes.io/projected/391eabca-f0e8-49c8-b98b-c495a39f9d46-kube-api-access-5t7zk\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.189026 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzfhd\" (UniqueName: \"kubernetes.io/projected/c147d18a-4e11-41c0-87fc-628ab428482b-kube-api-access-hzfhd\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.189037 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c147d18a-4e11-41c0-87fc-628ab428482b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.189046 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/391eabca-f0e8-49c8-b98b-c495a39f9d46-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.195376 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "d8d5b61c-f184-4963-ba9f-9cf698fd8e60" (UID: "d8d5b61c-f184-4963-ba9f-9cf698fd8e60"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.204866 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-kube-api-access-hk92c" (OuterVolumeSpecName: "kube-api-access-hk92c") pod "d8d5b61c-f184-4963-ba9f-9cf698fd8e60" (UID: "d8d5b61c-f184-4963-ba9f-9cf698fd8e60"). InnerVolumeSpecName "kube-api-access-hk92c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.283510 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-config" (OuterVolumeSpecName: "config") pod "d8d5b61c-f184-4963-ba9f-9cf698fd8e60" (UID: "d8d5b61c-f184-4963-ba9f-9cf698fd8e60"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.291893 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk92c\" (UniqueName: \"kubernetes.io/projected/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-kube-api-access-hk92c\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.291934 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.291945 4796 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.337267 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d8d5b61c-f184-4963-ba9f-9cf698fd8e60" (UID: "d8d5b61c-f184-4963-ba9f-9cf698fd8e60"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.339286 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8d5b61c-f184-4963-ba9f-9cf698fd8e60" (UID: "d8d5b61c-f184-4963-ba9f-9cf698fd8e60"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.387301 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-l2nn8" event={"ID":"391eabca-f0e8-49c8-b98b-c495a39f9d46","Type":"ContainerDied","Data":"e439923ca8cb44ac62375330ad9db50308ae7319796f1b7a2419705a9c5ee4e0"} Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.387345 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e439923ca8cb44ac62375330ad9db50308ae7319796f1b7a2419705a9c5ee4e0" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.387435 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-l2nn8" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.392012 4796 generic.go:334] "Generic (PLEG): container finished" podID="d8d5b61c-f184-4963-ba9f-9cf698fd8e60" containerID="01bdb9cd08a222ecf6317a59ee836ade4f1c0f917baeb824370b477ded583ac9" exitCode=0 Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.392079 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f9cd6669d-kmwxf" event={"ID":"d8d5b61c-f184-4963-ba9f-9cf698fd8e60","Type":"ContainerDied","Data":"01bdb9cd08a222ecf6317a59ee836ade4f1c0f917baeb824370b477ded583ac9"} Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.392206 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f9cd6669d-kmwxf" event={"ID":"d8d5b61c-f184-4963-ba9f-9cf698fd8e60","Type":"ContainerDied","Data":"63ab74923ecd7fb3bfbe77b3039dd96ded550e3a72d848e79eff6991e690c9cd"} Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.392236 4796 scope.go:117] "RemoveContainer" containerID="c8fd2c83594bc4161ed256d9a8b2042d8fb823c113c4eb68b03349d6af746dbb" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.392639 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f9cd6669d-kmwxf" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.407911 4796 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.407954 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d5b61c-f184-4963-ba9f-9cf698fd8e60-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.425554 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-mx7pg" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.425683 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mx7pg" event={"ID":"7d784bc2-02d6-423f-8286-74eae70a6986","Type":"ContainerDied","Data":"842a6e281184eb33e25ad8f2d173cc43b30557a994e2c5a81f5ca5637020cf52"} Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.425720 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="842a6e281184eb33e25ad8f2d173cc43b30557a994e2c5a81f5ca5637020cf52" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.445031 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f0b0-account-create-h8v4z" event={"ID":"c147d18a-4e11-41c0-87fc-628ab428482b","Type":"ContainerDied","Data":"fb59ba2aa645889797025af8878a402e9e060718ac4fd1673624c855f8049905"} Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.445078 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb59ba2aa645889797025af8878a402e9e060718ac4fd1673624c855f8049905" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.446782 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-f0b0-account-create-h8v4z" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.454119 4796 scope.go:117] "RemoveContainer" containerID="01bdb9cd08a222ecf6317a59ee836ade4f1c0f917baeb824370b477ded583ac9" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.480146 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5f9cd6669d-kmwxf"] Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.493600 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5f9cd6669d-kmwxf"] Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.496110 4796 scope.go:117] "RemoveContainer" containerID="c8fd2c83594bc4161ed256d9a8b2042d8fb823c113c4eb68b03349d6af746dbb" Nov 25 14:46:37 crc kubenswrapper[4796]: E1125 14:46:37.498011 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8fd2c83594bc4161ed256d9a8b2042d8fb823c113c4eb68b03349d6af746dbb\": container with ID starting with c8fd2c83594bc4161ed256d9a8b2042d8fb823c113c4eb68b03349d6af746dbb not found: ID does not exist" containerID="c8fd2c83594bc4161ed256d9a8b2042d8fb823c113c4eb68b03349d6af746dbb" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.498063 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8fd2c83594bc4161ed256d9a8b2042d8fb823c113c4eb68b03349d6af746dbb"} err="failed to get container status \"c8fd2c83594bc4161ed256d9a8b2042d8fb823c113c4eb68b03349d6af746dbb\": rpc error: code = NotFound desc = could not find container \"c8fd2c83594bc4161ed256d9a8b2042d8fb823c113c4eb68b03349d6af746dbb\": container with ID starting with c8fd2c83594bc4161ed256d9a8b2042d8fb823c113c4eb68b03349d6af746dbb not found: ID does not exist" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.498088 4796 scope.go:117] "RemoveContainer" containerID="01bdb9cd08a222ecf6317a59ee836ade4f1c0f917baeb824370b477ded583ac9" Nov 25 
14:46:37 crc kubenswrapper[4796]: E1125 14:46:37.498496 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01bdb9cd08a222ecf6317a59ee836ade4f1c0f917baeb824370b477ded583ac9\": container with ID starting with 01bdb9cd08a222ecf6317a59ee836ade4f1c0f917baeb824370b477ded583ac9 not found: ID does not exist" containerID="01bdb9cd08a222ecf6317a59ee836ade4f1c0f917baeb824370b477ded583ac9" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.498529 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01bdb9cd08a222ecf6317a59ee836ade4f1c0f917baeb824370b477ded583ac9"} err="failed to get container status \"01bdb9cd08a222ecf6317a59ee836ade4f1c0f917baeb824370b477ded583ac9\": rpc error: code = NotFound desc = could not find container \"01bdb9cd08a222ecf6317a59ee836ade4f1c0f917baeb824370b477ded583ac9\": container with ID starting with 01bdb9cd08a222ecf6317a59ee836ade4f1c0f917baeb824370b477ded583ac9 not found: ID does not exist" Nov 25 14:46:37 crc kubenswrapper[4796]: I1125 14:46:37.952598 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.018071 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-47c0-account-create-92x8f" Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.044997 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-476a-account-create-kw4p2" Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.095962 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-4v5qd" Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.123338 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsgfb\" (UniqueName: \"kubernetes.io/projected/d44da81f-aeed-45e1-b7bc-3f4608f077f9-kube-api-access-nsgfb\") pod \"d44da81f-aeed-45e1-b7bc-3f4608f077f9\" (UID: \"d44da81f-aeed-45e1-b7bc-3f4608f077f9\") " Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.123381 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d44da81f-aeed-45e1-b7bc-3f4608f077f9-operator-scripts\") pod \"d44da81f-aeed-45e1-b7bc-3f4608f077f9\" (UID: \"d44da81f-aeed-45e1-b7bc-3f4608f077f9\") " Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.123481 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9-operator-scripts\") pod \"d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9\" (UID: \"d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9\") " Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.123505 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfmjz\" (UniqueName: \"kubernetes.io/projected/d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9-kube-api-access-rfmjz\") pod \"d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9\" (UID: \"d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9\") " Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.125985 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d44da81f-aeed-45e1-b7bc-3f4608f077f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d44da81f-aeed-45e1-b7bc-3f4608f077f9" (UID: "d44da81f-aeed-45e1-b7bc-3f4608f077f9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.126637 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9" (UID: "d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.129749 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d44da81f-aeed-45e1-b7bc-3f4608f077f9-kube-api-access-nsgfb" (OuterVolumeSpecName: "kube-api-access-nsgfb") pod "d44da81f-aeed-45e1-b7bc-3f4608f077f9" (UID: "d44da81f-aeed-45e1-b7bc-3f4608f077f9"). InnerVolumeSpecName "kube-api-access-nsgfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.133015 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9-kube-api-access-rfmjz" (OuterVolumeSpecName: "kube-api-access-rfmjz") pod "d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9" (UID: "d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9"). InnerVolumeSpecName "kube-api-access-rfmjz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.226364 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df314672-6ee5-4768-b4e4-34df7f3abfd1-operator-scripts\") pod \"df314672-6ee5-4768-b4e4-34df7f3abfd1\" (UID: \"df314672-6ee5-4768-b4e4-34df7f3abfd1\") " Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.226744 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldtvn\" (UniqueName: \"kubernetes.io/projected/df314672-6ee5-4768-b4e4-34df7f3abfd1-kube-api-access-ldtvn\") pod \"df314672-6ee5-4768-b4e4-34df7f3abfd1\" (UID: \"df314672-6ee5-4768-b4e4-34df7f3abfd1\") " Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.227101 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d44da81f-aeed-45e1-b7bc-3f4608f077f9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.227117 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsgfb\" (UniqueName: \"kubernetes.io/projected/d44da81f-aeed-45e1-b7bc-3f4608f077f9-kube-api-access-nsgfb\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.227128 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.227136 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfmjz\" (UniqueName: \"kubernetes.io/projected/d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9-kube-api-access-rfmjz\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.227800 4796 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df314672-6ee5-4768-b4e4-34df7f3abfd1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "df314672-6ee5-4768-b4e4-34df7f3abfd1" (UID: "df314672-6ee5-4768-b4e4-34df7f3abfd1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.232254 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df314672-6ee5-4768-b4e4-34df7f3abfd1-kube-api-access-ldtvn" (OuterVolumeSpecName: "kube-api-access-ldtvn") pod "df314672-6ee5-4768-b4e4-34df7f3abfd1" (UID: "df314672-6ee5-4768-b4e4-34df7f3abfd1"). InnerVolumeSpecName "kube-api-access-ldtvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.328432 4796 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df314672-6ee5-4768-b4e4-34df7f3abfd1-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.328742 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldtvn\" (UniqueName: \"kubernetes.io/projected/df314672-6ee5-4768-b4e4-34df7f3abfd1-kube-api-access-ldtvn\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.422198 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c801ed15-ca44-4901-a402-93224e1e73b6" path="/var/lib/kubelet/pods/c801ed15-ca44-4901-a402-93224e1e73b6/volumes" Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.423092 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8d5b61c-f184-4963-ba9f-9cf698fd8e60" path="/var/lib/kubelet/pods/d8d5b61c-f184-4963-ba9f-9cf698fd8e60/volumes" Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.468977 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-47c0-account-create-92x8f" event={"ID":"d44da81f-aeed-45e1-b7bc-3f4608f077f9","Type":"ContainerDied","Data":"237054939cb2223256e766f6e21785dc17b475c5cbdc95cc096c9bc86314199c"} Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.469034 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="237054939cb2223256e766f6e21785dc17b475c5cbdc95cc096c9bc86314199c" Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.469118 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-47c0-account-create-92x8f" Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.478172 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-476a-account-create-kw4p2" event={"ID":"d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9","Type":"ContainerDied","Data":"dd675479cd13bda8968bfee5c65e93fb6b22a0f5cc8521a8c09fe2f60feeedb4"} Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.478204 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd675479cd13bda8968bfee5c65e93fb6b22a0f5cc8521a8c09fe2f60feeedb4" Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.478262 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-476a-account-create-kw4p2" Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.484026 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e","Type":"ContainerStarted","Data":"ecdef76202ddd2598dbef40fa1a4a056e97da01e862af3e4713c6e5561455f77"} Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.489224 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4v5qd" event={"ID":"df314672-6ee5-4768-b4e4-34df7f3abfd1","Type":"ContainerDied","Data":"4e72e9c5846db2e0f1b793e0db1d63b9e0fc5bf515fc4a7e50f65df696a9d461"} Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.489250 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e72e9c5846db2e0f1b793e0db1d63b9e0fc5bf515fc4a7e50f65df696a9d461" Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.489293 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-4v5qd" Nov 25 14:46:38 crc kubenswrapper[4796]: I1125 14:46:38.648746 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 25 14:46:40 crc kubenswrapper[4796]: I1125 14:46:40.612446 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:46:41 crc kubenswrapper[4796]: I1125 14:46:41.510049 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:41 crc kubenswrapper[4796]: I1125 14:46:41.517205 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6b6dc55d99-xcq8j" Nov 25 14:46:42 crc kubenswrapper[4796]: I1125 14:46:42.316747 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7cd9956864-5xkx5" podUID="23942f6c-a777-4b11-a51d-ccaee1fff6e7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.144:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.144:8443: connect: connection refused" Nov 25 14:46:42 crc kubenswrapper[4796]: I1125 14:46:42.317082 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.590592 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kpnjm"] Nov 25 14:46:43 crc kubenswrapper[4796]: E1125 14:46:43.591046 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="391eabca-f0e8-49c8-b98b-c495a39f9d46" containerName="mariadb-database-create" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.591061 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="391eabca-f0e8-49c8-b98b-c495a39f9d46" containerName="mariadb-database-create" Nov 25 14:46:43 crc kubenswrapper[4796]: E1125 14:46:43.591080 4796 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="d8d5b61c-f184-4963-ba9f-9cf698fd8e60" containerName="neutron-httpd" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.591086 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8d5b61c-f184-4963-ba9f-9cf698fd8e60" containerName="neutron-httpd" Nov 25 14:46:43 crc kubenswrapper[4796]: E1125 14:46:43.591105 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d44da81f-aeed-45e1-b7bc-3f4608f077f9" containerName="mariadb-account-create" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.591111 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d44da81f-aeed-45e1-b7bc-3f4608f077f9" containerName="mariadb-account-create" Nov 25 14:46:43 crc kubenswrapper[4796]: E1125 14:46:43.591125 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df314672-6ee5-4768-b4e4-34df7f3abfd1" containerName="mariadb-database-create" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.591131 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="df314672-6ee5-4768-b4e4-34df7f3abfd1" containerName="mariadb-database-create" Nov 25 14:46:43 crc kubenswrapper[4796]: E1125 14:46:43.591144 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8d5b61c-f184-4963-ba9f-9cf698fd8e60" containerName="neutron-api" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.591152 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8d5b61c-f184-4963-ba9f-9cf698fd8e60" containerName="neutron-api" Nov 25 14:46:43 crc kubenswrapper[4796]: E1125 14:46:43.591167 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9" containerName="mariadb-account-create" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.591173 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9" containerName="mariadb-account-create" Nov 25 14:46:43 crc kubenswrapper[4796]: E1125 14:46:43.591182 4796 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d784bc2-02d6-423f-8286-74eae70a6986" containerName="mariadb-database-create" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.591187 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d784bc2-02d6-423f-8286-74eae70a6986" containerName="mariadb-database-create" Nov 25 14:46:43 crc kubenswrapper[4796]: E1125 14:46:43.591198 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c147d18a-4e11-41c0-87fc-628ab428482b" containerName="mariadb-account-create" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.591204 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c147d18a-4e11-41c0-87fc-628ab428482b" containerName="mariadb-account-create" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.591385 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d784bc2-02d6-423f-8286-74eae70a6986" containerName="mariadb-database-create" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.591397 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d5b61c-f184-4963-ba9f-9cf698fd8e60" containerName="neutron-api" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.591407 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d5b61c-f184-4963-ba9f-9cf698fd8e60" containerName="neutron-httpd" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.591418 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9" containerName="mariadb-account-create" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.591428 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d44da81f-aeed-45e1-b7bc-3f4608f077f9" containerName="mariadb-account-create" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.591441 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c147d18a-4e11-41c0-87fc-628ab428482b" containerName="mariadb-account-create" 
Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.591452 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="df314672-6ee5-4768-b4e4-34df7f3abfd1" containerName="mariadb-database-create" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.591462 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="391eabca-f0e8-49c8-b98b-c495a39f9d46" containerName="mariadb-database-create" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.592087 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kpnjm" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.594830 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.595120 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.595238 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rjcnz" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.604725 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kpnjm"] Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.744877 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r26sc\" (UniqueName: \"kubernetes.io/projected/33e7984e-9b94-436b-90f4-82e5253ac471-kube-api-access-r26sc\") pod \"nova-cell0-conductor-db-sync-kpnjm\" (UID: \"33e7984e-9b94-436b-90f4-82e5253ac471\") " pod="openstack/nova-cell0-conductor-db-sync-kpnjm" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.744947 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e7984e-9b94-436b-90f4-82e5253ac471-scripts\") pod 
\"nova-cell0-conductor-db-sync-kpnjm\" (UID: \"33e7984e-9b94-436b-90f4-82e5253ac471\") " pod="openstack/nova-cell0-conductor-db-sync-kpnjm" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.744982 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e7984e-9b94-436b-90f4-82e5253ac471-config-data\") pod \"nova-cell0-conductor-db-sync-kpnjm\" (UID: \"33e7984e-9b94-436b-90f4-82e5253ac471\") " pod="openstack/nova-cell0-conductor-db-sync-kpnjm" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.745013 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e7984e-9b94-436b-90f4-82e5253ac471-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kpnjm\" (UID: \"33e7984e-9b94-436b-90f4-82e5253ac471\") " pod="openstack/nova-cell0-conductor-db-sync-kpnjm" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.836334 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.846410 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e7984e-9b94-436b-90f4-82e5253ac471-config-data\") pod \"nova-cell0-conductor-db-sync-kpnjm\" (UID: \"33e7984e-9b94-436b-90f4-82e5253ac471\") " pod="openstack/nova-cell0-conductor-db-sync-kpnjm" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.846488 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e7984e-9b94-436b-90f4-82e5253ac471-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kpnjm\" (UID: \"33e7984e-9b94-436b-90f4-82e5253ac471\") " pod="openstack/nova-cell0-conductor-db-sync-kpnjm" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 
14:46:43.846647 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r26sc\" (UniqueName: \"kubernetes.io/projected/33e7984e-9b94-436b-90f4-82e5253ac471-kube-api-access-r26sc\") pod \"nova-cell0-conductor-db-sync-kpnjm\" (UID: \"33e7984e-9b94-436b-90f4-82e5253ac471\") " pod="openstack/nova-cell0-conductor-db-sync-kpnjm" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.846777 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e7984e-9b94-436b-90f4-82e5253ac471-scripts\") pod \"nova-cell0-conductor-db-sync-kpnjm\" (UID: \"33e7984e-9b94-436b-90f4-82e5253ac471\") " pod="openstack/nova-cell0-conductor-db-sync-kpnjm" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.855096 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e7984e-9b94-436b-90f4-82e5253ac471-scripts\") pod \"nova-cell0-conductor-db-sync-kpnjm\" (UID: \"33e7984e-9b94-436b-90f4-82e5253ac471\") " pod="openstack/nova-cell0-conductor-db-sync-kpnjm" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.856743 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e7984e-9b94-436b-90f4-82e5253ac471-config-data\") pod \"nova-cell0-conductor-db-sync-kpnjm\" (UID: \"33e7984e-9b94-436b-90f4-82e5253ac471\") " pod="openstack/nova-cell0-conductor-db-sync-kpnjm" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.860026 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e7984e-9b94-436b-90f4-82e5253ac471-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kpnjm\" (UID: \"33e7984e-9b94-436b-90f4-82e5253ac471\") " pod="openstack/nova-cell0-conductor-db-sync-kpnjm" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.871676 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r26sc\" (UniqueName: \"kubernetes.io/projected/33e7984e-9b94-436b-90f4-82e5253ac471-kube-api-access-r26sc\") pod \"nova-cell0-conductor-db-sync-kpnjm\" (UID: \"33e7984e-9b94-436b-90f4-82e5253ac471\") " pod="openstack/nova-cell0-conductor-db-sync-kpnjm" Nov 25 14:46:43 crc kubenswrapper[4796]: I1125 14:46:43.933030 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kpnjm" Nov 25 14:46:45 crc kubenswrapper[4796]: I1125 14:46:45.012751 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kpnjm"] Nov 25 14:46:45 crc kubenswrapper[4796]: I1125 14:46:45.604699 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kpnjm" event={"ID":"33e7984e-9b94-436b-90f4-82e5253ac471","Type":"ContainerStarted","Data":"ebf0702ffc50d8212b185b534a6531163e8d791318c40b8c061d10c7c32dfe24"} Nov 25 14:46:45 crc kubenswrapper[4796]: I1125 14:46:45.607750 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"120f9ac5-531c-4821-b033-d4b316f6ea61","Type":"ContainerStarted","Data":"8b7c0d9bb7f68d79499f449e1444dafb486cf2fde5b2288b5fc3ae45ca14f92a"} Nov 25 14:46:45 crc kubenswrapper[4796]: I1125 14:46:45.610909 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e","Type":"ContainerStarted","Data":"48680de207b9c7b9364c044680a2ff8df7fe895097a5dcd24944e12122c7015e"} Nov 25 14:46:45 crc kubenswrapper[4796]: I1125 14:46:45.623686 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.423485377 podStartE2EDuration="19.623667731s" podCreationTimestamp="2025-11-25 14:46:26 +0000 UTC" firstStartedPulling="2025-11-25 14:46:27.437332263 +0000 UTC m=+1315.780441687" 
lastFinishedPulling="2025-11-25 14:46:44.637514617 +0000 UTC m=+1332.980624041" observedRunningTime="2025-11-25 14:46:45.6185305 +0000 UTC m=+1333.961639944" watchObservedRunningTime="2025-11-25 14:46:45.623667731 +0000 UTC m=+1333.966777165" Nov 25 14:46:46 crc kubenswrapper[4796]: I1125 14:46:46.627850 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e","Type":"ContainerStarted","Data":"14d3beac5f512c002c4ecf364c81989ee8758fbd67c74c1e13605c4377841a34"} Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.647458 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e","Type":"ContainerStarted","Data":"14df181bf81c0fc880779bcf4665d81de00aa8b09f24f6b89bb5dc77c966e694"} Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.653895 4796 generic.go:334] "Generic (PLEG): container finished" podID="23942f6c-a777-4b11-a51d-ccaee1fff6e7" containerID="ae453e3aaa7cbba30fd5bc3de23897fd5dc332bf3b291917085c8ce4126081c4" exitCode=137 Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.653932 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cd9956864-5xkx5" event={"ID":"23942f6c-a777-4b11-a51d-ccaee1fff6e7","Type":"ContainerDied","Data":"ae453e3aaa7cbba30fd5bc3de23897fd5dc332bf3b291917085c8ce4126081c4"} Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.653955 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cd9956864-5xkx5" event={"ID":"23942f6c-a777-4b11-a51d-ccaee1fff6e7","Type":"ContainerDied","Data":"75003a311ed2d81553af263aec7a65ad04eab0b4da26091f438fb6fed28b70c7"} Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.653965 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75003a311ed2d81553af263aec7a65ad04eab0b4da26091f438fb6fed28b70c7" Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.696328 
4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.825741 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/23942f6c-a777-4b11-a51d-ccaee1fff6e7-horizon-tls-certs\") pod \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.825835 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23942f6c-a777-4b11-a51d-ccaee1fff6e7-scripts\") pod \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.825869 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23942f6c-a777-4b11-a51d-ccaee1fff6e7-config-data\") pod \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.825889 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23942f6c-a777-4b11-a51d-ccaee1fff6e7-combined-ca-bundle\") pod \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.825913 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j79g\" (UniqueName: \"kubernetes.io/projected/23942f6c-a777-4b11-a51d-ccaee1fff6e7-kube-api-access-7j79g\") pod \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.825942 4796 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/23942f6c-a777-4b11-a51d-ccaee1fff6e7-horizon-secret-key\") pod \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.825992 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23942f6c-a777-4b11-a51d-ccaee1fff6e7-logs\") pod \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\" (UID: \"23942f6c-a777-4b11-a51d-ccaee1fff6e7\") " Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.827861 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23942f6c-a777-4b11-a51d-ccaee1fff6e7-logs" (OuterVolumeSpecName: "logs") pod "23942f6c-a777-4b11-a51d-ccaee1fff6e7" (UID: "23942f6c-a777-4b11-a51d-ccaee1fff6e7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.828158 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/23942f6c-a777-4b11-a51d-ccaee1fff6e7-logs\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.839824 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23942f6c-a777-4b11-a51d-ccaee1fff6e7-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "23942f6c-a777-4b11-a51d-ccaee1fff6e7" (UID: "23942f6c-a777-4b11-a51d-ccaee1fff6e7"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.840048 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23942f6c-a777-4b11-a51d-ccaee1fff6e7-kube-api-access-7j79g" (OuterVolumeSpecName: "kube-api-access-7j79g") pod "23942f6c-a777-4b11-a51d-ccaee1fff6e7" (UID: "23942f6c-a777-4b11-a51d-ccaee1fff6e7"). InnerVolumeSpecName "kube-api-access-7j79g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.860014 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23942f6c-a777-4b11-a51d-ccaee1fff6e7-scripts" (OuterVolumeSpecName: "scripts") pod "23942f6c-a777-4b11-a51d-ccaee1fff6e7" (UID: "23942f6c-a777-4b11-a51d-ccaee1fff6e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.867255 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23942f6c-a777-4b11-a51d-ccaee1fff6e7-config-data" (OuterVolumeSpecName: "config-data") pod "23942f6c-a777-4b11-a51d-ccaee1fff6e7" (UID: "23942f6c-a777-4b11-a51d-ccaee1fff6e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.871198 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23942f6c-a777-4b11-a51d-ccaee1fff6e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23942f6c-a777-4b11-a51d-ccaee1fff6e7" (UID: "23942f6c-a777-4b11-a51d-ccaee1fff6e7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.877176 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23942f6c-a777-4b11-a51d-ccaee1fff6e7-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "23942f6c-a777-4b11-a51d-ccaee1fff6e7" (UID: "23942f6c-a777-4b11-a51d-ccaee1fff6e7"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.929883 4796 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/23942f6c-a777-4b11-a51d-ccaee1fff6e7-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.929912 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23942f6c-a777-4b11-a51d-ccaee1fff6e7-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.929921 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23942f6c-a777-4b11-a51d-ccaee1fff6e7-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.929933 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23942f6c-a777-4b11-a51d-ccaee1fff6e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.929942 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j79g\" (UniqueName: \"kubernetes.io/projected/23942f6c-a777-4b11-a51d-ccaee1fff6e7-kube-api-access-7j79g\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:47 crc kubenswrapper[4796]: I1125 14:46:47.929952 4796 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/23942f6c-a777-4b11-a51d-ccaee1fff6e7-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:48 crc kubenswrapper[4796]: I1125 14:46:48.660605 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cd9956864-5xkx5" Nov 25 14:46:48 crc kubenswrapper[4796]: I1125 14:46:48.688183 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7cd9956864-5xkx5"] Nov 25 14:46:48 crc kubenswrapper[4796]: I1125 14:46:48.696034 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7cd9956864-5xkx5"] Nov 25 14:46:49 crc kubenswrapper[4796]: I1125 14:46:49.513874 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 14:46:49 crc kubenswrapper[4796]: I1125 14:46:49.513964 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 14:46:50 crc kubenswrapper[4796]: I1125 14:46:50.419408 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23942f6c-a777-4b11-a51d-ccaee1fff6e7" path="/var/lib/kubelet/pods/23942f6c-a777-4b11-a51d-ccaee1fff6e7/volumes" Nov 25 14:46:56 crc kubenswrapper[4796]: I1125 14:46:56.744175 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e","Type":"ContainerStarted","Data":"64f03eceaa64ea2891129aa13f14d8ac603015c2f657915ec0b60d76351a69b1"} Nov 25 14:46:56 crc kubenswrapper[4796]: I1125 14:46:56.744798 4796 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 14:46:56 crc kubenswrapper[4796]: I1125 14:46:56.744320 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" containerName="sg-core" containerID="cri-o://14df181bf81c0fc880779bcf4665d81de00aa8b09f24f6b89bb5dc77c966e694" gracePeriod=30 Nov 25 14:46:56 crc kubenswrapper[4796]: I1125 14:46:56.744308 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" containerName="proxy-httpd" containerID="cri-o://64f03eceaa64ea2891129aa13f14d8ac603015c2f657915ec0b60d76351a69b1" gracePeriod=30 Nov 25 14:46:56 crc kubenswrapper[4796]: I1125 14:46:56.744447 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" containerName="ceilometer-notification-agent" containerID="cri-o://14d3beac5f512c002c4ecf364c81989ee8758fbd67c74c1e13605c4377841a34" gracePeriod=30 Nov 25 14:46:56 crc kubenswrapper[4796]: I1125 14:46:56.744274 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" containerName="ceilometer-central-agent" containerID="cri-o://48680de207b9c7b9364c044680a2ff8df7fe895097a5dcd24944e12122c7015e" gracePeriod=30 Nov 25 14:46:56 crc kubenswrapper[4796]: I1125 14:46:56.746043 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kpnjm" event={"ID":"33e7984e-9b94-436b-90f4-82e5253ac471","Type":"ContainerStarted","Data":"2b6c58762b86fc60c47b6ebb02dead1149ab2e31713bab09b35bd04c117eefea"} Nov 25 14:46:56 crc kubenswrapper[4796]: I1125 14:46:56.772050 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=2.864292157 podStartE2EDuration="20.77203294s" podCreationTimestamp="2025-11-25 14:46:36 +0000 UTC" firstStartedPulling="2025-11-25 14:46:37.974619732 +0000 UTC m=+1326.317729156" lastFinishedPulling="2025-11-25 14:46:55.882360515 +0000 UTC m=+1344.225469939" observedRunningTime="2025-11-25 14:46:56.770147681 +0000 UTC m=+1345.113257125" watchObservedRunningTime="2025-11-25 14:46:56.77203294 +0000 UTC m=+1345.115142364" Nov 25 14:46:56 crc kubenswrapper[4796]: I1125 14:46:56.796326 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-kpnjm" podStartSLOduration=2.852228665 podStartE2EDuration="13.796303917s" podCreationTimestamp="2025-11-25 14:46:43 +0000 UTC" firstStartedPulling="2025-11-25 14:46:45.028167505 +0000 UTC m=+1333.371276939" lastFinishedPulling="2025-11-25 14:46:55.972242767 +0000 UTC m=+1344.315352191" observedRunningTime="2025-11-25 14:46:56.790600669 +0000 UTC m=+1345.133710123" watchObservedRunningTime="2025-11-25 14:46:56.796303917 +0000 UTC m=+1345.139413341" Nov 25 14:46:57 crc kubenswrapper[4796]: I1125 14:46:57.757081 4796 generic.go:334] "Generic (PLEG): container finished" podID="61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" containerID="64f03eceaa64ea2891129aa13f14d8ac603015c2f657915ec0b60d76351a69b1" exitCode=0 Nov 25 14:46:57 crc kubenswrapper[4796]: I1125 14:46:57.757126 4796 generic.go:334] "Generic (PLEG): container finished" podID="61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" containerID="14df181bf81c0fc880779bcf4665d81de00aa8b09f24f6b89bb5dc77c966e694" exitCode=2 Nov 25 14:46:57 crc kubenswrapper[4796]: I1125 14:46:57.757133 4796 generic.go:334] "Generic (PLEG): container finished" podID="61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" containerID="48680de207b9c7b9364c044680a2ff8df7fe895097a5dcd24944e12122c7015e" exitCode=0 Nov 25 14:46:57 crc kubenswrapper[4796]: I1125 14:46:57.757150 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e","Type":"ContainerDied","Data":"64f03eceaa64ea2891129aa13f14d8ac603015c2f657915ec0b60d76351a69b1"} Nov 25 14:46:57 crc kubenswrapper[4796]: I1125 14:46:57.757205 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e","Type":"ContainerDied","Data":"14df181bf81c0fc880779bcf4665d81de00aa8b09f24f6b89bb5dc77c966e694"} Nov 25 14:46:57 crc kubenswrapper[4796]: I1125 14:46:57.757218 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e","Type":"ContainerDied","Data":"48680de207b9c7b9364c044680a2ff8df7fe895097a5dcd24944e12122c7015e"} Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.328239 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.420226 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-sg-core-conf-yaml\") pod \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.420277 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-run-httpd\") pod \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.420305 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-combined-ca-bundle\") pod \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " Nov 25 14:46:58 crc 
kubenswrapper[4796]: I1125 14:46:58.420442 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99qls\" (UniqueName: \"kubernetes.io/projected/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-kube-api-access-99qls\") pod \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.420466 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-log-httpd\") pod \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.420498 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-config-data\") pod \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.420526 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-scripts\") pod \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\" (UID: \"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e\") " Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.421069 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" (UID: "61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.421108 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" (UID: "61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.430064 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-scripts" (OuterVolumeSpecName: "scripts") pod "61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" (UID: "61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.451890 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-kube-api-access-99qls" (OuterVolumeSpecName: "kube-api-access-99qls") pod "61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" (UID: "61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e"). InnerVolumeSpecName "kube-api-access-99qls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.473147 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" (UID: "61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.517850 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" (UID: "61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.522245 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.522269 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.522279 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.522290 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99qls\" (UniqueName: \"kubernetes.io/projected/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-kube-api-access-99qls\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.522300 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.522309 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.536377 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-config-data" (OuterVolumeSpecName: "config-data") pod "61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" (UID: "61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.623417 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.768233 4796 generic.go:334] "Generic (PLEG): container finished" podID="61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" containerID="14d3beac5f512c002c4ecf364c81989ee8758fbd67c74c1e13605c4377841a34" exitCode=0 Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.768282 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e","Type":"ContainerDied","Data":"14d3beac5f512c002c4ecf364c81989ee8758fbd67c74c1e13605c4377841a34"} Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.768308 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e","Type":"ContainerDied","Data":"ecdef76202ddd2598dbef40fa1a4a056e97da01e862af3e4713c6e5561455f77"} Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.768324 4796 scope.go:117] "RemoveContainer" containerID="64f03eceaa64ea2891129aa13f14d8ac603015c2f657915ec0b60d76351a69b1" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.768321 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.788826 4796 scope.go:117] "RemoveContainer" containerID="14df181bf81c0fc880779bcf4665d81de00aa8b09f24f6b89bb5dc77c966e694" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.814064 4796 scope.go:117] "RemoveContainer" containerID="14d3beac5f512c002c4ecf364c81989ee8758fbd67c74c1e13605c4377841a34" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.820328 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.830924 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.833755 4796 scope.go:117] "RemoveContainer" containerID="48680de207b9c7b9364c044680a2ff8df7fe895097a5dcd24944e12122c7015e" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.840336 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:46:58 crc kubenswrapper[4796]: E1125 14:46:58.840774 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" containerName="sg-core" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.840791 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" containerName="sg-core" Nov 25 14:46:58 crc kubenswrapper[4796]: E1125 14:46:58.840828 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" containerName="ceilometer-central-agent" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.840835 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" containerName="ceilometer-central-agent" Nov 25 14:46:58 crc kubenswrapper[4796]: E1125 14:46:58.840850 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" containerName="ceilometer-notification-agent" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.840857 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" containerName="ceilometer-notification-agent" Nov 25 14:46:58 crc kubenswrapper[4796]: E1125 14:46:58.840871 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23942f6c-a777-4b11-a51d-ccaee1fff6e7" containerName="horizon-log" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.840877 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="23942f6c-a777-4b11-a51d-ccaee1fff6e7" containerName="horizon-log" Nov 25 14:46:58 crc kubenswrapper[4796]: E1125 14:46:58.840890 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" containerName="proxy-httpd" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.840897 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" containerName="proxy-httpd" Nov 25 14:46:58 crc kubenswrapper[4796]: E1125 14:46:58.840910 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23942f6c-a777-4b11-a51d-ccaee1fff6e7" containerName="horizon" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.840916 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="23942f6c-a777-4b11-a51d-ccaee1fff6e7" containerName="horizon" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.841116 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="23942f6c-a777-4b11-a51d-ccaee1fff6e7" containerName="horizon-log" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.841135 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" containerName="ceilometer-central-agent" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.841146 4796 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" containerName="sg-core" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.841156 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="23942f6c-a777-4b11-a51d-ccaee1fff6e7" containerName="horizon" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.841167 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" containerName="proxy-httpd" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.841177 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" containerName="ceilometer-notification-agent" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.842896 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.846452 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.847084 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.855094 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.902047 4796 scope.go:117] "RemoveContainer" containerID="64f03eceaa64ea2891129aa13f14d8ac603015c2f657915ec0b60d76351a69b1" Nov 25 14:46:58 crc kubenswrapper[4796]: E1125 14:46:58.904884 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64f03eceaa64ea2891129aa13f14d8ac603015c2f657915ec0b60d76351a69b1\": container with ID starting with 64f03eceaa64ea2891129aa13f14d8ac603015c2f657915ec0b60d76351a69b1 not found: ID does not exist" containerID="64f03eceaa64ea2891129aa13f14d8ac603015c2f657915ec0b60d76351a69b1" Nov 25 14:46:58 crc 
kubenswrapper[4796]: I1125 14:46:58.904956 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64f03eceaa64ea2891129aa13f14d8ac603015c2f657915ec0b60d76351a69b1"} err="failed to get container status \"64f03eceaa64ea2891129aa13f14d8ac603015c2f657915ec0b60d76351a69b1\": rpc error: code = NotFound desc = could not find container \"64f03eceaa64ea2891129aa13f14d8ac603015c2f657915ec0b60d76351a69b1\": container with ID starting with 64f03eceaa64ea2891129aa13f14d8ac603015c2f657915ec0b60d76351a69b1 not found: ID does not exist" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.904991 4796 scope.go:117] "RemoveContainer" containerID="14df181bf81c0fc880779bcf4665d81de00aa8b09f24f6b89bb5dc77c966e694" Nov 25 14:46:58 crc kubenswrapper[4796]: E1125 14:46:58.905517 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14df181bf81c0fc880779bcf4665d81de00aa8b09f24f6b89bb5dc77c966e694\": container with ID starting with 14df181bf81c0fc880779bcf4665d81de00aa8b09f24f6b89bb5dc77c966e694 not found: ID does not exist" containerID="14df181bf81c0fc880779bcf4665d81de00aa8b09f24f6b89bb5dc77c966e694" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.905550 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14df181bf81c0fc880779bcf4665d81de00aa8b09f24f6b89bb5dc77c966e694"} err="failed to get container status \"14df181bf81c0fc880779bcf4665d81de00aa8b09f24f6b89bb5dc77c966e694\": rpc error: code = NotFound desc = could not find container \"14df181bf81c0fc880779bcf4665d81de00aa8b09f24f6b89bb5dc77c966e694\": container with ID starting with 14df181bf81c0fc880779bcf4665d81de00aa8b09f24f6b89bb5dc77c966e694 not found: ID does not exist" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.905583 4796 scope.go:117] "RemoveContainer" containerID="14d3beac5f512c002c4ecf364c81989ee8758fbd67c74c1e13605c4377841a34" Nov 25 
14:46:58 crc kubenswrapper[4796]: E1125 14:46:58.905916 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14d3beac5f512c002c4ecf364c81989ee8758fbd67c74c1e13605c4377841a34\": container with ID starting with 14d3beac5f512c002c4ecf364c81989ee8758fbd67c74c1e13605c4377841a34 not found: ID does not exist" containerID="14d3beac5f512c002c4ecf364c81989ee8758fbd67c74c1e13605c4377841a34" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.905949 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14d3beac5f512c002c4ecf364c81989ee8758fbd67c74c1e13605c4377841a34"} err="failed to get container status \"14d3beac5f512c002c4ecf364c81989ee8758fbd67c74c1e13605c4377841a34\": rpc error: code = NotFound desc = could not find container \"14d3beac5f512c002c4ecf364c81989ee8758fbd67c74c1e13605c4377841a34\": container with ID starting with 14d3beac5f512c002c4ecf364c81989ee8758fbd67c74c1e13605c4377841a34 not found: ID does not exist" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.905966 4796 scope.go:117] "RemoveContainer" containerID="48680de207b9c7b9364c044680a2ff8df7fe895097a5dcd24944e12122c7015e" Nov 25 14:46:58 crc kubenswrapper[4796]: E1125 14:46:58.906451 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48680de207b9c7b9364c044680a2ff8df7fe895097a5dcd24944e12122c7015e\": container with ID starting with 48680de207b9c7b9364c044680a2ff8df7fe895097a5dcd24944e12122c7015e not found: ID does not exist" containerID="48680de207b9c7b9364c044680a2ff8df7fe895097a5dcd24944e12122c7015e" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.906479 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48680de207b9c7b9364c044680a2ff8df7fe895097a5dcd24944e12122c7015e"} err="failed to get container status 
\"48680de207b9c7b9364c044680a2ff8df7fe895097a5dcd24944e12122c7015e\": rpc error: code = NotFound desc = could not find container \"48680de207b9c7b9364c044680a2ff8df7fe895097a5dcd24944e12122c7015e\": container with ID starting with 48680de207b9c7b9364c044680a2ff8df7fe895097a5dcd24944e12122c7015e not found: ID does not exist" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.928056 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b158f2af-c328-4d90-970d-f571396a745e-log-httpd\") pod \"ceilometer-0\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " pod="openstack/ceilometer-0" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.928151 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b158f2af-c328-4d90-970d-f571396a745e-scripts\") pod \"ceilometer-0\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " pod="openstack/ceilometer-0" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.928202 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b158f2af-c328-4d90-970d-f571396a745e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " pod="openstack/ceilometer-0" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.928303 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b158f2af-c328-4d90-970d-f571396a745e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " pod="openstack/ceilometer-0" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.928338 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/b158f2af-c328-4d90-970d-f571396a745e-run-httpd\") pod \"ceilometer-0\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " pod="openstack/ceilometer-0" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.928368 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z5j9\" (UniqueName: \"kubernetes.io/projected/b158f2af-c328-4d90-970d-f571396a745e-kube-api-access-4z5j9\") pod \"ceilometer-0\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " pod="openstack/ceilometer-0" Nov 25 14:46:58 crc kubenswrapper[4796]: I1125 14:46:58.928418 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b158f2af-c328-4d90-970d-f571396a745e-config-data\") pod \"ceilometer-0\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " pod="openstack/ceilometer-0" Nov 25 14:46:59 crc kubenswrapper[4796]: I1125 14:46:59.029843 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b158f2af-c328-4d90-970d-f571396a745e-log-httpd\") pod \"ceilometer-0\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " pod="openstack/ceilometer-0" Nov 25 14:46:59 crc kubenswrapper[4796]: I1125 14:46:59.029934 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b158f2af-c328-4d90-970d-f571396a745e-scripts\") pod \"ceilometer-0\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " pod="openstack/ceilometer-0" Nov 25 14:46:59 crc kubenswrapper[4796]: I1125 14:46:59.029995 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b158f2af-c328-4d90-970d-f571396a745e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " pod="openstack/ceilometer-0" Nov 
25 14:46:59 crc kubenswrapper[4796]: I1125 14:46:59.030112 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b158f2af-c328-4d90-970d-f571396a745e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " pod="openstack/ceilometer-0" Nov 25 14:46:59 crc kubenswrapper[4796]: I1125 14:46:59.030144 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b158f2af-c328-4d90-970d-f571396a745e-run-httpd\") pod \"ceilometer-0\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " pod="openstack/ceilometer-0" Nov 25 14:46:59 crc kubenswrapper[4796]: I1125 14:46:59.030175 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z5j9\" (UniqueName: \"kubernetes.io/projected/b158f2af-c328-4d90-970d-f571396a745e-kube-api-access-4z5j9\") pod \"ceilometer-0\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " pod="openstack/ceilometer-0" Nov 25 14:46:59 crc kubenswrapper[4796]: I1125 14:46:59.030229 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b158f2af-c328-4d90-970d-f571396a745e-config-data\") pod \"ceilometer-0\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " pod="openstack/ceilometer-0" Nov 25 14:46:59 crc kubenswrapper[4796]: I1125 14:46:59.030917 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b158f2af-c328-4d90-970d-f571396a745e-log-httpd\") pod \"ceilometer-0\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " pod="openstack/ceilometer-0" Nov 25 14:46:59 crc kubenswrapper[4796]: I1125 14:46:59.031025 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b158f2af-c328-4d90-970d-f571396a745e-run-httpd\") 
pod \"ceilometer-0\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " pod="openstack/ceilometer-0" Nov 25 14:46:59 crc kubenswrapper[4796]: I1125 14:46:59.034078 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b158f2af-c328-4d90-970d-f571396a745e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " pod="openstack/ceilometer-0" Nov 25 14:46:59 crc kubenswrapper[4796]: I1125 14:46:59.034084 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b158f2af-c328-4d90-970d-f571396a745e-scripts\") pod \"ceilometer-0\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " pod="openstack/ceilometer-0" Nov 25 14:46:59 crc kubenswrapper[4796]: I1125 14:46:59.034919 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b158f2af-c328-4d90-970d-f571396a745e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " pod="openstack/ceilometer-0" Nov 25 14:46:59 crc kubenswrapper[4796]: I1125 14:46:59.035643 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b158f2af-c328-4d90-970d-f571396a745e-config-data\") pod \"ceilometer-0\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " pod="openstack/ceilometer-0" Nov 25 14:46:59 crc kubenswrapper[4796]: I1125 14:46:59.051117 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z5j9\" (UniqueName: \"kubernetes.io/projected/b158f2af-c328-4d90-970d-f571396a745e-kube-api-access-4z5j9\") pod \"ceilometer-0\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " pod="openstack/ceilometer-0" Nov 25 14:46:59 crc kubenswrapper[4796]: I1125 14:46:59.184201 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:46:59 crc kubenswrapper[4796]: I1125 14:46:59.654815 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:46:59 crc kubenswrapper[4796]: I1125 14:46:59.660389 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 14:46:59 crc kubenswrapper[4796]: I1125 14:46:59.778903 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b158f2af-c328-4d90-970d-f571396a745e","Type":"ContainerStarted","Data":"6e1e2947f1cba38099fd1a109d5af29799464833d103c69d8f181ba862d93acc"} Nov 25 14:47:00 crc kubenswrapper[4796]: I1125 14:47:00.421873 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e" path="/var/lib/kubelet/pods/61d72fbe-34a2-4dfd-b1d0-a84510b6ec7e/volumes" Nov 25 14:47:00 crc kubenswrapper[4796]: I1125 14:47:00.790337 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b158f2af-c328-4d90-970d-f571396a745e","Type":"ContainerStarted","Data":"644da905ec125f174af2db392dc4f62b287575c9405b0326cdafdc5dee3e230b"} Nov 25 14:47:01 crc kubenswrapper[4796]: I1125 14:47:01.802926 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b158f2af-c328-4d90-970d-f571396a745e","Type":"ContainerStarted","Data":"110be5e352231142aecc072c01aa507238ac57bc5e98ea963f0ff2a7a49aa6bf"} Nov 25 14:47:02 crc kubenswrapper[4796]: I1125 14:47:02.301254 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:47:02 crc kubenswrapper[4796]: I1125 14:47:02.814138 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b158f2af-c328-4d90-970d-f571396a745e","Type":"ContainerStarted","Data":"fdd6af2a6e4b5ecce31d36fb6af1b63662f89e06ce8aaa2ebdd6e1d589e471ea"} Nov 25 14:47:03 crc 
kubenswrapper[4796]: I1125 14:47:03.034446 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 14:47:03 crc kubenswrapper[4796]: I1125 14:47:03.034748 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="33a4eb54-4ef7-4290-9030-d957632b40c0" containerName="glance-log" containerID="cri-o://5ceffb5ecf4222a555d788541c1f8317d254410c6f2305d650c243a46f998732" gracePeriod=30 Nov 25 14:47:03 crc kubenswrapper[4796]: I1125 14:47:03.034833 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="33a4eb54-4ef7-4290-9030-d957632b40c0" containerName="glance-httpd" containerID="cri-o://44833391bffa7612ed487dfbef1eb8472fa974a41fa7e889c651b5b28baa05e1" gracePeriod=30 Nov 25 14:47:03 crc kubenswrapper[4796]: I1125 14:47:03.823026 4796 generic.go:334] "Generic (PLEG): container finished" podID="33a4eb54-4ef7-4290-9030-d957632b40c0" containerID="5ceffb5ecf4222a555d788541c1f8317d254410c6f2305d650c243a46f998732" exitCode=143 Nov 25 14:47:03 crc kubenswrapper[4796]: I1125 14:47:03.823184 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"33a4eb54-4ef7-4290-9030-d957632b40c0","Type":"ContainerDied","Data":"5ceffb5ecf4222a555d788541c1f8317d254410c6f2305d650c243a46f998732"} Nov 25 14:47:04 crc kubenswrapper[4796]: I1125 14:47:04.105152 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 14:47:04 crc kubenswrapper[4796]: I1125 14:47:04.105393 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="995ed35a-afd4-48d3-af01-e35145fdf1f0" containerName="glance-log" containerID="cri-o://5beff934f399a69659913eab987e79d284d65bbc196681ff4103d40c3afe3bda" gracePeriod=30 Nov 25 14:47:04 crc 
kubenswrapper[4796]: I1125 14:47:04.105652 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="995ed35a-afd4-48d3-af01-e35145fdf1f0" containerName="glance-httpd" containerID="cri-o://1bdccd89deee39bf25f7bd73ea512b28b91363c0fa66e8bcb52c96e2b52eb6ab" gracePeriod=30 Nov 25 14:47:04 crc kubenswrapper[4796]: I1125 14:47:04.837800 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b158f2af-c328-4d90-970d-f571396a745e","Type":"ContainerStarted","Data":"2bdd0650bc8dd048fc93fedc6bad889744e1081101e542dad6967b3851aa6b24"} Nov 25 14:47:04 crc kubenswrapper[4796]: I1125 14:47:04.838156 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 14:47:04 crc kubenswrapper[4796]: I1125 14:47:04.838004 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b158f2af-c328-4d90-970d-f571396a745e" containerName="sg-core" containerID="cri-o://fdd6af2a6e4b5ecce31d36fb6af1b63662f89e06ce8aaa2ebdd6e1d589e471ea" gracePeriod=30 Nov 25 14:47:04 crc kubenswrapper[4796]: I1125 14:47:04.837966 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b158f2af-c328-4d90-970d-f571396a745e" containerName="proxy-httpd" containerID="cri-o://2bdd0650bc8dd048fc93fedc6bad889744e1081101e542dad6967b3851aa6b24" gracePeriod=30 Nov 25 14:47:04 crc kubenswrapper[4796]: I1125 14:47:04.838040 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b158f2af-c328-4d90-970d-f571396a745e" containerName="ceilometer-notification-agent" containerID="cri-o://110be5e352231142aecc072c01aa507238ac57bc5e98ea963f0ff2a7a49aa6bf" gracePeriod=30 Nov 25 14:47:04 crc kubenswrapper[4796]: I1125 14:47:04.838179 4796 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="b158f2af-c328-4d90-970d-f571396a745e" containerName="ceilometer-central-agent" containerID="cri-o://644da905ec125f174af2db392dc4f62b287575c9405b0326cdafdc5dee3e230b" gracePeriod=30 Nov 25 14:47:04 crc kubenswrapper[4796]: I1125 14:47:04.843971 4796 generic.go:334] "Generic (PLEG): container finished" podID="995ed35a-afd4-48d3-af01-e35145fdf1f0" containerID="5beff934f399a69659913eab987e79d284d65bbc196681ff4103d40c3afe3bda" exitCode=143 Nov 25 14:47:04 crc kubenswrapper[4796]: I1125 14:47:04.844015 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"995ed35a-afd4-48d3-af01-e35145fdf1f0","Type":"ContainerDied","Data":"5beff934f399a69659913eab987e79d284d65bbc196681ff4103d40c3afe3bda"} Nov 25 14:47:04 crc kubenswrapper[4796]: I1125 14:47:04.871259 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.748993811 podStartE2EDuration="6.871241432s" podCreationTimestamp="2025-11-25 14:46:58 +0000 UTC" firstStartedPulling="2025-11-25 14:46:59.660185708 +0000 UTC m=+1348.003295132" lastFinishedPulling="2025-11-25 14:47:03.782433329 +0000 UTC m=+1352.125542753" observedRunningTime="2025-11-25 14:47:04.866233616 +0000 UTC m=+1353.209343050" watchObservedRunningTime="2025-11-25 14:47:04.871241432 +0000 UTC m=+1353.214350856" Nov 25 14:47:05 crc kubenswrapper[4796]: I1125 14:47:05.858188 4796 generic.go:334] "Generic (PLEG): container finished" podID="b158f2af-c328-4d90-970d-f571396a745e" containerID="2bdd0650bc8dd048fc93fedc6bad889744e1081101e542dad6967b3851aa6b24" exitCode=0 Nov 25 14:47:05 crc kubenswrapper[4796]: I1125 14:47:05.858433 4796 generic.go:334] "Generic (PLEG): container finished" podID="b158f2af-c328-4d90-970d-f571396a745e" containerID="fdd6af2a6e4b5ecce31d36fb6af1b63662f89e06ce8aaa2ebdd6e1d589e471ea" exitCode=2 Nov 25 14:47:05 crc kubenswrapper[4796]: I1125 14:47:05.858445 4796 generic.go:334] 
"Generic (PLEG): container finished" podID="b158f2af-c328-4d90-970d-f571396a745e" containerID="110be5e352231142aecc072c01aa507238ac57bc5e98ea963f0ff2a7a49aa6bf" exitCode=0 Nov 25 14:47:05 crc kubenswrapper[4796]: I1125 14:47:05.858400 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b158f2af-c328-4d90-970d-f571396a745e","Type":"ContainerDied","Data":"2bdd0650bc8dd048fc93fedc6bad889744e1081101e542dad6967b3851aa6b24"} Nov 25 14:47:05 crc kubenswrapper[4796]: I1125 14:47:05.858481 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b158f2af-c328-4d90-970d-f571396a745e","Type":"ContainerDied","Data":"fdd6af2a6e4b5ecce31d36fb6af1b63662f89e06ce8aaa2ebdd6e1d589e471ea"} Nov 25 14:47:05 crc kubenswrapper[4796]: I1125 14:47:05.858495 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b158f2af-c328-4d90-970d-f571396a745e","Type":"ContainerDied","Data":"110be5e352231142aecc072c01aa507238ac57bc5e98ea963f0ff2a7a49aa6bf"} Nov 25 14:47:06 crc kubenswrapper[4796]: I1125 14:47:06.872636 4796 generic.go:334] "Generic (PLEG): container finished" podID="33a4eb54-4ef7-4290-9030-d957632b40c0" containerID="44833391bffa7612ed487dfbef1eb8472fa974a41fa7e889c651b5b28baa05e1" exitCode=0 Nov 25 14:47:06 crc kubenswrapper[4796]: I1125 14:47:06.872944 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"33a4eb54-4ef7-4290-9030-d957632b40c0","Type":"ContainerDied","Data":"44833391bffa7612ed487dfbef1eb8472fa974a41fa7e889c651b5b28baa05e1"} Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.636226 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.706101 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33a4eb54-4ef7-4290-9030-d957632b40c0-httpd-run\") pod \"33a4eb54-4ef7-4290-9030-d957632b40c0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.706174 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33a4eb54-4ef7-4290-9030-d957632b40c0-logs\") pod \"33a4eb54-4ef7-4290-9030-d957632b40c0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.706211 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33a4eb54-4ef7-4290-9030-d957632b40c0-scripts\") pod \"33a4eb54-4ef7-4290-9030-d957632b40c0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.706252 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbzjv\" (UniqueName: \"kubernetes.io/projected/33a4eb54-4ef7-4290-9030-d957632b40c0-kube-api-access-dbzjv\") pod \"33a4eb54-4ef7-4290-9030-d957632b40c0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.706295 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33a4eb54-4ef7-4290-9030-d957632b40c0-combined-ca-bundle\") pod \"33a4eb54-4ef7-4290-9030-d957632b40c0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.706331 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/33a4eb54-4ef7-4290-9030-d957632b40c0-config-data\") pod \"33a4eb54-4ef7-4290-9030-d957632b40c0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.706348 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"33a4eb54-4ef7-4290-9030-d957632b40c0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.706365 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33a4eb54-4ef7-4290-9030-d957632b40c0-public-tls-certs\") pod \"33a4eb54-4ef7-4290-9030-d957632b40c0\" (UID: \"33a4eb54-4ef7-4290-9030-d957632b40c0\") " Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.708083 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33a4eb54-4ef7-4290-9030-d957632b40c0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "33a4eb54-4ef7-4290-9030-d957632b40c0" (UID: "33a4eb54-4ef7-4290-9030-d957632b40c0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.708135 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33a4eb54-4ef7-4290-9030-d957632b40c0-logs" (OuterVolumeSpecName: "logs") pod "33a4eb54-4ef7-4290-9030-d957632b40c0" (UID: "33a4eb54-4ef7-4290-9030-d957632b40c0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.714930 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33a4eb54-4ef7-4290-9030-d957632b40c0-kube-api-access-dbzjv" (OuterVolumeSpecName: "kube-api-access-dbzjv") pod "33a4eb54-4ef7-4290-9030-d957632b40c0" (UID: "33a4eb54-4ef7-4290-9030-d957632b40c0"). InnerVolumeSpecName "kube-api-access-dbzjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.721094 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "33a4eb54-4ef7-4290-9030-d957632b40c0" (UID: "33a4eb54-4ef7-4290-9030-d957632b40c0"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.724761 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33a4eb54-4ef7-4290-9030-d957632b40c0-scripts" (OuterVolumeSpecName: "scripts") pod "33a4eb54-4ef7-4290-9030-d957632b40c0" (UID: "33a4eb54-4ef7-4290-9030-d957632b40c0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.769377 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33a4eb54-4ef7-4290-9030-d957632b40c0-config-data" (OuterVolumeSpecName: "config-data") pod "33a4eb54-4ef7-4290-9030-d957632b40c0" (UID: "33a4eb54-4ef7-4290-9030-d957632b40c0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.770263 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33a4eb54-4ef7-4290-9030-d957632b40c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33a4eb54-4ef7-4290-9030-d957632b40c0" (UID: "33a4eb54-4ef7-4290-9030-d957632b40c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.785147 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33a4eb54-4ef7-4290-9030-d957632b40c0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "33a4eb54-4ef7-4290-9030-d957632b40c0" (UID: "33a4eb54-4ef7-4290-9030-d957632b40c0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.809387 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbzjv\" (UniqueName: \"kubernetes.io/projected/33a4eb54-4ef7-4290-9030-d957632b40c0-kube-api-access-dbzjv\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.809425 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33a4eb54-4ef7-4290-9030-d957632b40c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.809438 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33a4eb54-4ef7-4290-9030-d957632b40c0-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.809479 4796 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node 
\"crc\" " Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.809517 4796 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33a4eb54-4ef7-4290-9030-d957632b40c0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.809529 4796 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/33a4eb54-4ef7-4290-9030-d957632b40c0-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.809541 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33a4eb54-4ef7-4290-9030-d957632b40c0-logs\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.809550 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33a4eb54-4ef7-4290-9030-d957632b40c0-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.830645 4796 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.883737 4796 generic.go:334] "Generic (PLEG): container finished" podID="995ed35a-afd4-48d3-af01-e35145fdf1f0" containerID="1bdccd89deee39bf25f7bd73ea512b28b91363c0fa66e8bcb52c96e2b52eb6ab" exitCode=0 Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.883832 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"995ed35a-afd4-48d3-af01-e35145fdf1f0","Type":"ContainerDied","Data":"1bdccd89deee39bf25f7bd73ea512b28b91363c0fa66e8bcb52c96e2b52eb6ab"} Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.886835 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"33a4eb54-4ef7-4290-9030-d957632b40c0","Type":"ContainerDied","Data":"ef57ac5a3abb6a5c98f7993a739503b0942e30d189e9515f5d47d64e17183d86"} Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.886889 4796 scope.go:117] "RemoveContainer" containerID="44833391bffa7612ed487dfbef1eb8472fa974a41fa7e889c651b5b28baa05e1" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.886891 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.911430 4796 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.921985 4796 scope.go:117] "RemoveContainer" containerID="5ceffb5ecf4222a555d788541c1f8317d254410c6f2305d650c243a46f998732" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.926264 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.934562 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.962567 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 14:47:07 crc kubenswrapper[4796]: E1125 14:47:07.962936 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33a4eb54-4ef7-4290-9030-d957632b40c0" containerName="glance-log" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.962947 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="33a4eb54-4ef7-4290-9030-d957632b40c0" containerName="glance-log" Nov 25 14:47:07 crc kubenswrapper[4796]: E1125 14:47:07.962962 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33a4eb54-4ef7-4290-9030-d957632b40c0" 
containerName="glance-httpd" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.962969 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="33a4eb54-4ef7-4290-9030-d957632b40c0" containerName="glance-httpd" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.963157 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="33a4eb54-4ef7-4290-9030-d957632b40c0" containerName="glance-log" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.963169 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="33a4eb54-4ef7-4290-9030-d957632b40c0" containerName="glance-httpd" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.964196 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.967181 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.967885 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 14:47:07 crc kubenswrapper[4796]: I1125 14:47:07.990674 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.115249 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbf103ff-9a5b-408b-b69a-9383d471a83a-config-data\") pod \"glance-default-external-api-0\" (UID: \"cbf103ff-9a5b-408b-b69a-9383d471a83a\") " pod="openstack/glance-default-external-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.115680 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbf103ff-9a5b-408b-b69a-9383d471a83a-public-tls-certs\") 
pod \"glance-default-external-api-0\" (UID: \"cbf103ff-9a5b-408b-b69a-9383d471a83a\") " pod="openstack/glance-default-external-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.115755 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"cbf103ff-9a5b-408b-b69a-9383d471a83a\") " pod="openstack/glance-default-external-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.115780 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbf103ff-9a5b-408b-b69a-9383d471a83a-logs\") pod \"glance-default-external-api-0\" (UID: \"cbf103ff-9a5b-408b-b69a-9383d471a83a\") " pod="openstack/glance-default-external-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.115809 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-764jd\" (UniqueName: \"kubernetes.io/projected/cbf103ff-9a5b-408b-b69a-9383d471a83a-kube-api-access-764jd\") pod \"glance-default-external-api-0\" (UID: \"cbf103ff-9a5b-408b-b69a-9383d471a83a\") " pod="openstack/glance-default-external-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.115842 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbf103ff-9a5b-408b-b69a-9383d471a83a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cbf103ff-9a5b-408b-b69a-9383d471a83a\") " pod="openstack/glance-default-external-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.115868 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/cbf103ff-9a5b-408b-b69a-9383d471a83a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cbf103ff-9a5b-408b-b69a-9383d471a83a\") " pod="openstack/glance-default-external-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.115887 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbf103ff-9a5b-408b-b69a-9383d471a83a-scripts\") pod \"glance-default-external-api-0\" (UID: \"cbf103ff-9a5b-408b-b69a-9383d471a83a\") " pod="openstack/glance-default-external-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.217349 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbf103ff-9a5b-408b-b69a-9383d471a83a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cbf103ff-9a5b-408b-b69a-9383d471a83a\") " pod="openstack/glance-default-external-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.217386 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbf103ff-9a5b-408b-b69a-9383d471a83a-scripts\") pod \"glance-default-external-api-0\" (UID: \"cbf103ff-9a5b-408b-b69a-9383d471a83a\") " pod="openstack/glance-default-external-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.217446 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbf103ff-9a5b-408b-b69a-9383d471a83a-config-data\") pod \"glance-default-external-api-0\" (UID: \"cbf103ff-9a5b-408b-b69a-9383d471a83a\") " pod="openstack/glance-default-external-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.217489 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbf103ff-9a5b-408b-b69a-9383d471a83a-public-tls-certs\") 
pod \"glance-default-external-api-0\" (UID: \"cbf103ff-9a5b-408b-b69a-9383d471a83a\") " pod="openstack/glance-default-external-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.217533 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"cbf103ff-9a5b-408b-b69a-9383d471a83a\") " pod="openstack/glance-default-external-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.217556 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbf103ff-9a5b-408b-b69a-9383d471a83a-logs\") pod \"glance-default-external-api-0\" (UID: \"cbf103ff-9a5b-408b-b69a-9383d471a83a\") " pod="openstack/glance-default-external-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.217599 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-764jd\" (UniqueName: \"kubernetes.io/projected/cbf103ff-9a5b-408b-b69a-9383d471a83a-kube-api-access-764jd\") pod \"glance-default-external-api-0\" (UID: \"cbf103ff-9a5b-408b-b69a-9383d471a83a\") " pod="openstack/glance-default-external-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.217625 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbf103ff-9a5b-408b-b69a-9383d471a83a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cbf103ff-9a5b-408b-b69a-9383d471a83a\") " pod="openstack/glance-default-external-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.218349 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"cbf103ff-9a5b-408b-b69a-9383d471a83a\") device 
mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.218528 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbf103ff-9a5b-408b-b69a-9383d471a83a-logs\") pod \"glance-default-external-api-0\" (UID: \"cbf103ff-9a5b-408b-b69a-9383d471a83a\") " pod="openstack/glance-default-external-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.218800 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbf103ff-9a5b-408b-b69a-9383d471a83a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cbf103ff-9a5b-408b-b69a-9383d471a83a\") " pod="openstack/glance-default-external-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.223537 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbf103ff-9a5b-408b-b69a-9383d471a83a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cbf103ff-9a5b-408b-b69a-9383d471a83a\") " pod="openstack/glance-default-external-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.225288 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbf103ff-9a5b-408b-b69a-9383d471a83a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cbf103ff-9a5b-408b-b69a-9383d471a83a\") " pod="openstack/glance-default-external-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.227478 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbf103ff-9a5b-408b-b69a-9383d471a83a-config-data\") pod \"glance-default-external-api-0\" (UID: \"cbf103ff-9a5b-408b-b69a-9383d471a83a\") " pod="openstack/glance-default-external-api-0" Nov 25 14:47:08 crc 
kubenswrapper[4796]: I1125 14:47:08.227898 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbf103ff-9a5b-408b-b69a-9383d471a83a-scripts\") pod \"glance-default-external-api-0\" (UID: \"cbf103ff-9a5b-408b-b69a-9383d471a83a\") " pod="openstack/glance-default-external-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.244982 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"cbf103ff-9a5b-408b-b69a-9383d471a83a\") " pod="openstack/glance-default-external-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.248028 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-764jd\" (UniqueName: \"kubernetes.io/projected/cbf103ff-9a5b-408b-b69a-9383d471a83a-kube-api-access-764jd\") pod \"glance-default-external-api-0\" (UID: \"cbf103ff-9a5b-408b-b69a-9383d471a83a\") " pod="openstack/glance-default-external-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.283476 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.426249 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33a4eb54-4ef7-4290-9030-d957632b40c0" path="/var/lib/kubelet/pods/33a4eb54-4ef7-4290-9030-d957632b40c0/volumes" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.482013 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.627212 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995ed35a-afd4-48d3-af01-e35145fdf1f0-config-data\") pod \"995ed35a-afd4-48d3-af01-e35145fdf1f0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.627526 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"995ed35a-afd4-48d3-af01-e35145fdf1f0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.627667 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995ed35a-afd4-48d3-af01-e35145fdf1f0-combined-ca-bundle\") pod \"995ed35a-afd4-48d3-af01-e35145fdf1f0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.627896 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/995ed35a-afd4-48d3-af01-e35145fdf1f0-internal-tls-certs\") pod \"995ed35a-afd4-48d3-af01-e35145fdf1f0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.627991 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tng4\" (UniqueName: \"kubernetes.io/projected/995ed35a-afd4-48d3-af01-e35145fdf1f0-kube-api-access-5tng4\") pod \"995ed35a-afd4-48d3-af01-e35145fdf1f0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.628111 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/995ed35a-afd4-48d3-af01-e35145fdf1f0-httpd-run\") pod \"995ed35a-afd4-48d3-af01-e35145fdf1f0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.628235 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/995ed35a-afd4-48d3-af01-e35145fdf1f0-logs\") pod \"995ed35a-afd4-48d3-af01-e35145fdf1f0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.628341 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/995ed35a-afd4-48d3-af01-e35145fdf1f0-scripts\") pod \"995ed35a-afd4-48d3-af01-e35145fdf1f0\" (UID: \"995ed35a-afd4-48d3-af01-e35145fdf1f0\") " Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.630765 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/995ed35a-afd4-48d3-af01-e35145fdf1f0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "995ed35a-afd4-48d3-af01-e35145fdf1f0" (UID: "995ed35a-afd4-48d3-af01-e35145fdf1f0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.645784 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/995ed35a-afd4-48d3-af01-e35145fdf1f0-kube-api-access-5tng4" (OuterVolumeSpecName: "kube-api-access-5tng4") pod "995ed35a-afd4-48d3-af01-e35145fdf1f0" (UID: "995ed35a-afd4-48d3-af01-e35145fdf1f0"). InnerVolumeSpecName "kube-api-access-5tng4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.646110 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/995ed35a-afd4-48d3-af01-e35145fdf1f0-scripts" (OuterVolumeSpecName: "scripts") pod "995ed35a-afd4-48d3-af01-e35145fdf1f0" (UID: "995ed35a-afd4-48d3-af01-e35145fdf1f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.646460 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/995ed35a-afd4-48d3-af01-e35145fdf1f0-logs" (OuterVolumeSpecName: "logs") pod "995ed35a-afd4-48d3-af01-e35145fdf1f0" (UID: "995ed35a-afd4-48d3-af01-e35145fdf1f0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.679317 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "995ed35a-afd4-48d3-af01-e35145fdf1f0" (UID: "995ed35a-afd4-48d3-af01-e35145fdf1f0"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.724873 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/995ed35a-afd4-48d3-af01-e35145fdf1f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "995ed35a-afd4-48d3-af01-e35145fdf1f0" (UID: "995ed35a-afd4-48d3-af01-e35145fdf1f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.731040 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/995ed35a-afd4-48d3-af01-e35145fdf1f0-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.731083 4796 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.731096 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/995ed35a-afd4-48d3-af01-e35145fdf1f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.731106 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tng4\" (UniqueName: \"kubernetes.io/projected/995ed35a-afd4-48d3-af01-e35145fdf1f0-kube-api-access-5tng4\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.731116 4796 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/995ed35a-afd4-48d3-af01-e35145fdf1f0-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.731126 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/995ed35a-afd4-48d3-af01-e35145fdf1f0-logs\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.778986 4796 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.802676 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/995ed35a-afd4-48d3-af01-e35145fdf1f0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "995ed35a-afd4-48d3-af01-e35145fdf1f0" (UID: "995ed35a-afd4-48d3-af01-e35145fdf1f0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.805548 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/995ed35a-afd4-48d3-af01-e35145fdf1f0-config-data" (OuterVolumeSpecName: "config-data") pod "995ed35a-afd4-48d3-af01-e35145fdf1f0" (UID: "995ed35a-afd4-48d3-af01-e35145fdf1f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.832814 4796 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/995ed35a-afd4-48d3-af01-e35145fdf1f0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.832854 4796 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.832864 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/995ed35a-afd4-48d3-af01-e35145fdf1f0-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.898373 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"995ed35a-afd4-48d3-af01-e35145fdf1f0","Type":"ContainerDied","Data":"d66d599e056e67e686e5aa61d621a456b12d1a40d060c7b0154a83af20748556"} Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.898416 4796 scope.go:117] "RemoveContainer" 
containerID="1bdccd89deee39bf25f7bd73ea512b28b91363c0fa66e8bcb52c96e2b52eb6ab" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.898411 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.931161 4796 scope.go:117] "RemoveContainer" containerID="5beff934f399a69659913eab987e79d284d65bbc196681ff4103d40c3afe3bda" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.933999 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.944438 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.956078 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 14:47:08 crc kubenswrapper[4796]: E1125 14:47:08.958861 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="995ed35a-afd4-48d3-af01-e35145fdf1f0" containerName="glance-httpd" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.958888 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="995ed35a-afd4-48d3-af01-e35145fdf1f0" containerName="glance-httpd" Nov 25 14:47:08 crc kubenswrapper[4796]: E1125 14:47:08.958905 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="995ed35a-afd4-48d3-af01-e35145fdf1f0" containerName="glance-log" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.958911 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="995ed35a-afd4-48d3-af01-e35145fdf1f0" containerName="glance-log" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.959086 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="995ed35a-afd4-48d3-af01-e35145fdf1f0" containerName="glance-log" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.959099 4796 
memory_manager.go:354] "RemoveStaleState removing state" podUID="995ed35a-afd4-48d3-af01-e35145fdf1f0" containerName="glance-httpd" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.960098 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.962214 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.962376 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 14:47:08 crc kubenswrapper[4796]: I1125 14:47:08.968737 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.027320 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.035799 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/498b441d-79fc-4fa9-b857-72cf2f022ec9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"498b441d-79fc-4fa9-b857-72cf2f022ec9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.035839 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/498b441d-79fc-4fa9-b857-72cf2f022ec9-logs\") pod \"glance-default-internal-api-0\" (UID: \"498b441d-79fc-4fa9-b857-72cf2f022ec9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.035859 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/498b441d-79fc-4fa9-b857-72cf2f022ec9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"498b441d-79fc-4fa9-b857-72cf2f022ec9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.035878 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"498b441d-79fc-4fa9-b857-72cf2f022ec9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.036032 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498b441d-79fc-4fa9-b857-72cf2f022ec9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"498b441d-79fc-4fa9-b857-72cf2f022ec9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.036214 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/498b441d-79fc-4fa9-b857-72cf2f022ec9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"498b441d-79fc-4fa9-b857-72cf2f022ec9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.036248 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498b441d-79fc-4fa9-b857-72cf2f022ec9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"498b441d-79fc-4fa9-b857-72cf2f022ec9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.036294 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9d27b\" (UniqueName: \"kubernetes.io/projected/498b441d-79fc-4fa9-b857-72cf2f022ec9-kube-api-access-9d27b\") pod \"glance-default-internal-api-0\" (UID: \"498b441d-79fc-4fa9-b857-72cf2f022ec9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.138783 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/498b441d-79fc-4fa9-b857-72cf2f022ec9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"498b441d-79fc-4fa9-b857-72cf2f022ec9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.138854 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498b441d-79fc-4fa9-b857-72cf2f022ec9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"498b441d-79fc-4fa9-b857-72cf2f022ec9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.138889 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d27b\" (UniqueName: \"kubernetes.io/projected/498b441d-79fc-4fa9-b857-72cf2f022ec9-kube-api-access-9d27b\") pod \"glance-default-internal-api-0\" (UID: \"498b441d-79fc-4fa9-b857-72cf2f022ec9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.138967 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/498b441d-79fc-4fa9-b857-72cf2f022ec9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"498b441d-79fc-4fa9-b857-72cf2f022ec9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.139011 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/498b441d-79fc-4fa9-b857-72cf2f022ec9-logs\") pod \"glance-default-internal-api-0\" (UID: \"498b441d-79fc-4fa9-b857-72cf2f022ec9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.139032 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498b441d-79fc-4fa9-b857-72cf2f022ec9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"498b441d-79fc-4fa9-b857-72cf2f022ec9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.139079 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"498b441d-79fc-4fa9-b857-72cf2f022ec9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.139153 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498b441d-79fc-4fa9-b857-72cf2f022ec9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"498b441d-79fc-4fa9-b857-72cf2f022ec9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.139628 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/498b441d-79fc-4fa9-b857-72cf2f022ec9-logs\") pod \"glance-default-internal-api-0\" (UID: \"498b441d-79fc-4fa9-b857-72cf2f022ec9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.139632 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"498b441d-79fc-4fa9-b857-72cf2f022ec9\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.139665 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/498b441d-79fc-4fa9-b857-72cf2f022ec9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"498b441d-79fc-4fa9-b857-72cf2f022ec9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.143834 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498b441d-79fc-4fa9-b857-72cf2f022ec9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"498b441d-79fc-4fa9-b857-72cf2f022ec9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.144054 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498b441d-79fc-4fa9-b857-72cf2f022ec9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"498b441d-79fc-4fa9-b857-72cf2f022ec9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.144089 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/498b441d-79fc-4fa9-b857-72cf2f022ec9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"498b441d-79fc-4fa9-b857-72cf2f022ec9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.146470 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/498b441d-79fc-4fa9-b857-72cf2f022ec9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"498b441d-79fc-4fa9-b857-72cf2f022ec9\") " 
pod="openstack/glance-default-internal-api-0" Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.158389 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d27b\" (UniqueName: \"kubernetes.io/projected/498b441d-79fc-4fa9-b857-72cf2f022ec9-kube-api-access-9d27b\") pod \"glance-default-internal-api-0\" (UID: \"498b441d-79fc-4fa9-b857-72cf2f022ec9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.175472 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"498b441d-79fc-4fa9-b857-72cf2f022ec9\") " pod="openstack/glance-default-internal-api-0" Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.285881 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.906188 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.947991 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cbf103ff-9a5b-408b-b69a-9383d471a83a","Type":"ContainerStarted","Data":"1ffaca2bf8e0e43ff5cbdf8575260c0754350cbf6ce3f9558770133252edd814"} Nov 25 14:47:09 crc kubenswrapper[4796]: I1125 14:47:09.948048 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cbf103ff-9a5b-408b-b69a-9383d471a83a","Type":"ContainerStarted","Data":"ea51d35302bcc05485faac87a8876f75b010c181a587a3b71c2fa1f5b218466c"} Nov 25 14:47:10 crc kubenswrapper[4796]: I1125 14:47:10.419884 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="995ed35a-afd4-48d3-af01-e35145fdf1f0" 
path="/var/lib/kubelet/pods/995ed35a-afd4-48d3-af01-e35145fdf1f0/volumes" Nov 25 14:47:10 crc kubenswrapper[4796]: I1125 14:47:10.972110 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"498b441d-79fc-4fa9-b857-72cf2f022ec9","Type":"ContainerStarted","Data":"ef5c78ecbb4f799d1ecbf9fde644b492e71d485d842fb51342a8b4ceac7a7f23"} Nov 25 14:47:10 crc kubenswrapper[4796]: I1125 14:47:10.972458 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"498b441d-79fc-4fa9-b857-72cf2f022ec9","Type":"ContainerStarted","Data":"b512546fadff3c18b52f143f80f3d2bcaede71333ba373b45ca69f82fe85f368"} Nov 25 14:47:10 crc kubenswrapper[4796]: I1125 14:47:10.975706 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cbf103ff-9a5b-408b-b69a-9383d471a83a","Type":"ContainerStarted","Data":"22fe9ff13d037eb9b5edbe9b92834fead7d2d54af7f22b059203814f6dc1d81d"} Nov 25 14:47:10 crc kubenswrapper[4796]: I1125 14:47:10.998859 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.998842333 podStartE2EDuration="3.998842333s" podCreationTimestamp="2025-11-25 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:47:10.993551547 +0000 UTC m=+1359.336660981" watchObservedRunningTime="2025-11-25 14:47:10.998842333 +0000 UTC m=+1359.341951757" Nov 25 14:47:11 crc kubenswrapper[4796]: I1125 14:47:11.986898 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"498b441d-79fc-4fa9-b857-72cf2f022ec9","Type":"ContainerStarted","Data":"4b7ecc47bf9a095c3c7a21b40c86458094e883cd2eb78bc34a8dbfa91ba83d73"} Nov 25 14:47:11 crc kubenswrapper[4796]: I1125 14:47:11.989905 4796 generic.go:334] 
"Generic (PLEG): container finished" podID="b158f2af-c328-4d90-970d-f571396a745e" containerID="644da905ec125f174af2db392dc4f62b287575c9405b0326cdafdc5dee3e230b" exitCode=0 Nov 25 14:47:11 crc kubenswrapper[4796]: I1125 14:47:11.990763 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b158f2af-c328-4d90-970d-f571396a745e","Type":"ContainerDied","Data":"644da905ec125f174af2db392dc4f62b287575c9405b0326cdafdc5dee3e230b"} Nov 25 14:47:12 crc kubenswrapper[4796]: I1125 14:47:12.009407 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.00938914 podStartE2EDuration="4.00938914s" podCreationTimestamp="2025-11-25 14:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:47:12.007151169 +0000 UTC m=+1360.350260593" watchObservedRunningTime="2025-11-25 14:47:12.00938914 +0000 UTC m=+1360.352498564" Nov 25 14:47:12 crc kubenswrapper[4796]: I1125 14:47:12.340922 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:47:12 crc kubenswrapper[4796]: I1125 14:47:12.497068 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b158f2af-c328-4d90-970d-f571396a745e-log-httpd\") pod \"b158f2af-c328-4d90-970d-f571396a745e\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " Nov 25 14:47:12 crc kubenswrapper[4796]: I1125 14:47:12.497210 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b158f2af-c328-4d90-970d-f571396a745e-combined-ca-bundle\") pod \"b158f2af-c328-4d90-970d-f571396a745e\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " Nov 25 14:47:12 crc kubenswrapper[4796]: I1125 14:47:12.497238 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b158f2af-c328-4d90-970d-f571396a745e-scripts\") pod \"b158f2af-c328-4d90-970d-f571396a745e\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " Nov 25 14:47:12 crc kubenswrapper[4796]: I1125 14:47:12.497308 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b158f2af-c328-4d90-970d-f571396a745e-run-httpd\") pod \"b158f2af-c328-4d90-970d-f571396a745e\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " Nov 25 14:47:12 crc kubenswrapper[4796]: I1125 14:47:12.497377 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b158f2af-c328-4d90-970d-f571396a745e-config-data\") pod \"b158f2af-c328-4d90-970d-f571396a745e\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " Nov 25 14:47:12 crc kubenswrapper[4796]: I1125 14:47:12.497409 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/b158f2af-c328-4d90-970d-f571396a745e-sg-core-conf-yaml\") pod \"b158f2af-c328-4d90-970d-f571396a745e\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " Nov 25 14:47:12 crc kubenswrapper[4796]: I1125 14:47:12.497424 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z5j9\" (UniqueName: \"kubernetes.io/projected/b158f2af-c328-4d90-970d-f571396a745e-kube-api-access-4z5j9\") pod \"b158f2af-c328-4d90-970d-f571396a745e\" (UID: \"b158f2af-c328-4d90-970d-f571396a745e\") " Nov 25 14:47:12 crc kubenswrapper[4796]: I1125 14:47:12.497591 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b158f2af-c328-4d90-970d-f571396a745e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b158f2af-c328-4d90-970d-f571396a745e" (UID: "b158f2af-c328-4d90-970d-f571396a745e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:47:12 crc kubenswrapper[4796]: I1125 14:47:12.497697 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b158f2af-c328-4d90-970d-f571396a745e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b158f2af-c328-4d90-970d-f571396a745e" (UID: "b158f2af-c328-4d90-970d-f571396a745e"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:47:12 crc kubenswrapper[4796]: I1125 14:47:12.498072 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b158f2af-c328-4d90-970d-f571396a745e-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:12 crc kubenswrapper[4796]: I1125 14:47:12.498089 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b158f2af-c328-4d90-970d-f571396a745e-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:12 crc kubenswrapper[4796]: I1125 14:47:12.503279 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b158f2af-c328-4d90-970d-f571396a745e-scripts" (OuterVolumeSpecName: "scripts") pod "b158f2af-c328-4d90-970d-f571396a745e" (UID: "b158f2af-c328-4d90-970d-f571396a745e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:47:12 crc kubenswrapper[4796]: I1125 14:47:12.509716 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b158f2af-c328-4d90-970d-f571396a745e-kube-api-access-4z5j9" (OuterVolumeSpecName: "kube-api-access-4z5j9") pod "b158f2af-c328-4d90-970d-f571396a745e" (UID: "b158f2af-c328-4d90-970d-f571396a745e"). InnerVolumeSpecName "kube-api-access-4z5j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:47:12 crc kubenswrapper[4796]: I1125 14:47:12.541354 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b158f2af-c328-4d90-970d-f571396a745e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b158f2af-c328-4d90-970d-f571396a745e" (UID: "b158f2af-c328-4d90-970d-f571396a745e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:47:12 crc kubenswrapper[4796]: I1125 14:47:12.582338 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b158f2af-c328-4d90-970d-f571396a745e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b158f2af-c328-4d90-970d-f571396a745e" (UID: "b158f2af-c328-4d90-970d-f571396a745e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:47:12 crc kubenswrapper[4796]: I1125 14:47:12.599651 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b158f2af-c328-4d90-970d-f571396a745e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:12 crc kubenswrapper[4796]: I1125 14:47:12.599682 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z5j9\" (UniqueName: \"kubernetes.io/projected/b158f2af-c328-4d90-970d-f571396a745e-kube-api-access-4z5j9\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:12 crc kubenswrapper[4796]: I1125 14:47:12.599693 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b158f2af-c328-4d90-970d-f571396a745e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:12 crc kubenswrapper[4796]: I1125 14:47:12.599701 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b158f2af-c328-4d90-970d-f571396a745e-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:12 crc kubenswrapper[4796]: I1125 14:47:12.632338 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b158f2af-c328-4d90-970d-f571396a745e-config-data" (OuterVolumeSpecName: "config-data") pod "b158f2af-c328-4d90-970d-f571396a745e" (UID: "b158f2af-c328-4d90-970d-f571396a745e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:47:12 crc kubenswrapper[4796]: I1125 14:47:12.702125 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b158f2af-c328-4d90-970d-f571396a745e-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.011849 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b158f2af-c328-4d90-970d-f571396a745e","Type":"ContainerDied","Data":"6e1e2947f1cba38099fd1a109d5af29799464833d103c69d8f181ba862d93acc"} Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.011889 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.011918 4796 scope.go:117] "RemoveContainer" containerID="2bdd0650bc8dd048fc93fedc6bad889744e1081101e542dad6967b3851aa6b24" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.059931 4796 scope.go:117] "RemoveContainer" containerID="fdd6af2a6e4b5ecce31d36fb6af1b63662f89e06ce8aaa2ebdd6e1d589e471ea" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.063886 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.074304 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.079797 4796 scope.go:117] "RemoveContainer" containerID="110be5e352231142aecc072c01aa507238ac57bc5e98ea963f0ff2a7a49aa6bf" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.110851 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:47:13 crc kubenswrapper[4796]: E1125 14:47:13.111273 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b158f2af-c328-4d90-970d-f571396a745e" containerName="sg-core" Nov 25 14:47:13 crc kubenswrapper[4796]: 
I1125 14:47:13.111295 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b158f2af-c328-4d90-970d-f571396a745e" containerName="sg-core" Nov 25 14:47:13 crc kubenswrapper[4796]: E1125 14:47:13.111313 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b158f2af-c328-4d90-970d-f571396a745e" containerName="ceilometer-notification-agent" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.111323 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b158f2af-c328-4d90-970d-f571396a745e" containerName="ceilometer-notification-agent" Nov 25 14:47:13 crc kubenswrapper[4796]: E1125 14:47:13.111349 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b158f2af-c328-4d90-970d-f571396a745e" containerName="ceilometer-central-agent" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.111357 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b158f2af-c328-4d90-970d-f571396a745e" containerName="ceilometer-central-agent" Nov 25 14:47:13 crc kubenswrapper[4796]: E1125 14:47:13.111388 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b158f2af-c328-4d90-970d-f571396a745e" containerName="proxy-httpd" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.111396 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b158f2af-c328-4d90-970d-f571396a745e" containerName="proxy-httpd" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.111644 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b158f2af-c328-4d90-970d-f571396a745e" containerName="sg-core" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.111678 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b158f2af-c328-4d90-970d-f571396a745e" containerName="ceilometer-notification-agent" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.111689 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b158f2af-c328-4d90-970d-f571396a745e" containerName="proxy-httpd" Nov 25 14:47:13 
crc kubenswrapper[4796]: I1125 14:47:13.111708 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b158f2af-c328-4d90-970d-f571396a745e" containerName="ceilometer-central-agent" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.113861 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.116987 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.117394 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.128054 4796 scope.go:117] "RemoveContainer" containerID="644da905ec125f174af2db392dc4f62b287575c9405b0326cdafdc5dee3e230b" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.128881 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.210763 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl8rt\" (UniqueName: \"kubernetes.io/projected/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-kube-api-access-wl8rt\") pod \"ceilometer-0\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " pod="openstack/ceilometer-0" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.210894 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " pod="openstack/ceilometer-0" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.210927 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-run-httpd\") pod \"ceilometer-0\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " pod="openstack/ceilometer-0" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.210946 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " pod="openstack/ceilometer-0" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.210991 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-scripts\") pod \"ceilometer-0\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " pod="openstack/ceilometer-0" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.211020 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-log-httpd\") pod \"ceilometer-0\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " pod="openstack/ceilometer-0" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.211100 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-config-data\") pod \"ceilometer-0\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " pod="openstack/ceilometer-0" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.312849 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl8rt\" (UniqueName: \"kubernetes.io/projected/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-kube-api-access-wl8rt\") pod \"ceilometer-0\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " 
pod="openstack/ceilometer-0" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.312931 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " pod="openstack/ceilometer-0" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.312964 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-run-httpd\") pod \"ceilometer-0\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " pod="openstack/ceilometer-0" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.312994 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " pod="openstack/ceilometer-0" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.313029 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-scripts\") pod \"ceilometer-0\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " pod="openstack/ceilometer-0" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.313056 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-log-httpd\") pod \"ceilometer-0\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " pod="openstack/ceilometer-0" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.313587 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-config-data\") pod \"ceilometer-0\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " pod="openstack/ceilometer-0" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.314081 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-log-httpd\") pod \"ceilometer-0\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " pod="openstack/ceilometer-0" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.314209 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-run-httpd\") pod \"ceilometer-0\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " pod="openstack/ceilometer-0" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.316852 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-scripts\") pod \"ceilometer-0\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " pod="openstack/ceilometer-0" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.317791 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " pod="openstack/ceilometer-0" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.319336 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " pod="openstack/ceilometer-0" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.322104 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-config-data\") pod \"ceilometer-0\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " pod="openstack/ceilometer-0" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.331976 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl8rt\" (UniqueName: \"kubernetes.io/projected/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-kube-api-access-wl8rt\") pod \"ceilometer-0\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " pod="openstack/ceilometer-0" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.438241 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:47:13 crc kubenswrapper[4796]: I1125 14:47:13.940248 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:47:13 crc kubenswrapper[4796]: W1125 14:47:13.961524 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff1da4cd_1c62_4098_b3b0_9c38bdae7377.slice/crio-a278cc2d93358077a2017fa81f21403a29ea1a7589d11474de70e92cff10d43e WatchSource:0}: Error finding container a278cc2d93358077a2017fa81f21403a29ea1a7589d11474de70e92cff10d43e: Status 404 returned error can't find the container with id a278cc2d93358077a2017fa81f21403a29ea1a7589d11474de70e92cff10d43e Nov 25 14:47:14 crc kubenswrapper[4796]: I1125 14:47:14.036248 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff1da4cd-1c62-4098-b3b0-9c38bdae7377","Type":"ContainerStarted","Data":"a278cc2d93358077a2017fa81f21403a29ea1a7589d11474de70e92cff10d43e"} Nov 25 14:47:14 crc kubenswrapper[4796]: I1125 14:47:14.041386 4796 generic.go:334] "Generic (PLEG): container finished" podID="33e7984e-9b94-436b-90f4-82e5253ac471" 
containerID="2b6c58762b86fc60c47b6ebb02dead1149ab2e31713bab09b35bd04c117eefea" exitCode=0 Nov 25 14:47:14 crc kubenswrapper[4796]: I1125 14:47:14.041420 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kpnjm" event={"ID":"33e7984e-9b94-436b-90f4-82e5253ac471","Type":"ContainerDied","Data":"2b6c58762b86fc60c47b6ebb02dead1149ab2e31713bab09b35bd04c117eefea"} Nov 25 14:47:14 crc kubenswrapper[4796]: I1125 14:47:14.425069 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b158f2af-c328-4d90-970d-f571396a745e" path="/var/lib/kubelet/pods/b158f2af-c328-4d90-970d-f571396a745e/volumes" Nov 25 14:47:15 crc kubenswrapper[4796]: I1125 14:47:15.053476 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff1da4cd-1c62-4098-b3b0-9c38bdae7377","Type":"ContainerStarted","Data":"2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379"} Nov 25 14:47:15 crc kubenswrapper[4796]: I1125 14:47:15.445623 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kpnjm" Nov 25 14:47:15 crc kubenswrapper[4796]: I1125 14:47:15.555795 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r26sc\" (UniqueName: \"kubernetes.io/projected/33e7984e-9b94-436b-90f4-82e5253ac471-kube-api-access-r26sc\") pod \"33e7984e-9b94-436b-90f4-82e5253ac471\" (UID: \"33e7984e-9b94-436b-90f4-82e5253ac471\") " Nov 25 14:47:15 crc kubenswrapper[4796]: I1125 14:47:15.556241 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e7984e-9b94-436b-90f4-82e5253ac471-scripts\") pod \"33e7984e-9b94-436b-90f4-82e5253ac471\" (UID: \"33e7984e-9b94-436b-90f4-82e5253ac471\") " Nov 25 14:47:15 crc kubenswrapper[4796]: I1125 14:47:15.556327 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e7984e-9b94-436b-90f4-82e5253ac471-combined-ca-bundle\") pod \"33e7984e-9b94-436b-90f4-82e5253ac471\" (UID: \"33e7984e-9b94-436b-90f4-82e5253ac471\") " Nov 25 14:47:15 crc kubenswrapper[4796]: I1125 14:47:15.556372 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e7984e-9b94-436b-90f4-82e5253ac471-config-data\") pod \"33e7984e-9b94-436b-90f4-82e5253ac471\" (UID: \"33e7984e-9b94-436b-90f4-82e5253ac471\") " Nov 25 14:47:15 crc kubenswrapper[4796]: I1125 14:47:15.564279 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33e7984e-9b94-436b-90f4-82e5253ac471-scripts" (OuterVolumeSpecName: "scripts") pod "33e7984e-9b94-436b-90f4-82e5253ac471" (UID: "33e7984e-9b94-436b-90f4-82e5253ac471"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:47:15 crc kubenswrapper[4796]: I1125 14:47:15.565566 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33e7984e-9b94-436b-90f4-82e5253ac471-kube-api-access-r26sc" (OuterVolumeSpecName: "kube-api-access-r26sc") pod "33e7984e-9b94-436b-90f4-82e5253ac471" (UID: "33e7984e-9b94-436b-90f4-82e5253ac471"). InnerVolumeSpecName "kube-api-access-r26sc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:47:15 crc kubenswrapper[4796]: I1125 14:47:15.593595 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33e7984e-9b94-436b-90f4-82e5253ac471-config-data" (OuterVolumeSpecName: "config-data") pod "33e7984e-9b94-436b-90f4-82e5253ac471" (UID: "33e7984e-9b94-436b-90f4-82e5253ac471"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:47:15 crc kubenswrapper[4796]: I1125 14:47:15.593688 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33e7984e-9b94-436b-90f4-82e5253ac471-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33e7984e-9b94-436b-90f4-82e5253ac471" (UID: "33e7984e-9b94-436b-90f4-82e5253ac471"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:47:15 crc kubenswrapper[4796]: I1125 14:47:15.664065 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r26sc\" (UniqueName: \"kubernetes.io/projected/33e7984e-9b94-436b-90f4-82e5253ac471-kube-api-access-r26sc\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:15 crc kubenswrapper[4796]: I1125 14:47:15.664110 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e7984e-9b94-436b-90f4-82e5253ac471-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:15 crc kubenswrapper[4796]: I1125 14:47:15.664127 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e7984e-9b94-436b-90f4-82e5253ac471-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:15 crc kubenswrapper[4796]: I1125 14:47:15.664142 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e7984e-9b94-436b-90f4-82e5253ac471-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:16 crc kubenswrapper[4796]: I1125 14:47:16.063303 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kpnjm" event={"ID":"33e7984e-9b94-436b-90f4-82e5253ac471","Type":"ContainerDied","Data":"ebf0702ffc50d8212b185b534a6531163e8d791318c40b8c061d10c7c32dfe24"} Nov 25 14:47:16 crc kubenswrapper[4796]: I1125 14:47:16.064389 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebf0702ffc50d8212b185b534a6531163e8d791318c40b8c061d10c7c32dfe24" Nov 25 14:47:16 crc kubenswrapper[4796]: I1125 14:47:16.063355 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kpnjm" Nov 25 14:47:16 crc kubenswrapper[4796]: I1125 14:47:16.065075 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff1da4cd-1c62-4098-b3b0-9c38bdae7377","Type":"ContainerStarted","Data":"3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8"} Nov 25 14:47:16 crc kubenswrapper[4796]: I1125 14:47:16.158957 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 14:47:16 crc kubenswrapper[4796]: E1125 14:47:16.159419 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33e7984e-9b94-436b-90f4-82e5253ac471" containerName="nova-cell0-conductor-db-sync" Nov 25 14:47:16 crc kubenswrapper[4796]: I1125 14:47:16.159441 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="33e7984e-9b94-436b-90f4-82e5253ac471" containerName="nova-cell0-conductor-db-sync" Nov 25 14:47:16 crc kubenswrapper[4796]: I1125 14:47:16.159666 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="33e7984e-9b94-436b-90f4-82e5253ac471" containerName="nova-cell0-conductor-db-sync" Nov 25 14:47:16 crc kubenswrapper[4796]: I1125 14:47:16.160360 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 25 14:47:16 crc kubenswrapper[4796]: I1125 14:47:16.162993 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 25 14:47:16 crc kubenswrapper[4796]: I1125 14:47:16.163404 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rjcnz" Nov 25 14:47:16 crc kubenswrapper[4796]: I1125 14:47:16.180189 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 14:47:16 crc kubenswrapper[4796]: I1125 14:47:16.273792 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/596b66cb-f7ca-4a23-969a-81168ad1d8b1-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"596b66cb-f7ca-4a23-969a-81168ad1d8b1\") " pod="openstack/nova-cell0-conductor-0" Nov 25 14:47:16 crc kubenswrapper[4796]: I1125 14:47:16.274113 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn6jb\" (UniqueName: \"kubernetes.io/projected/596b66cb-f7ca-4a23-969a-81168ad1d8b1-kube-api-access-xn6jb\") pod \"nova-cell0-conductor-0\" (UID: \"596b66cb-f7ca-4a23-969a-81168ad1d8b1\") " pod="openstack/nova-cell0-conductor-0" Nov 25 14:47:16 crc kubenswrapper[4796]: I1125 14:47:16.274296 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/596b66cb-f7ca-4a23-969a-81168ad1d8b1-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"596b66cb-f7ca-4a23-969a-81168ad1d8b1\") " pod="openstack/nova-cell0-conductor-0" Nov 25 14:47:16 crc kubenswrapper[4796]: I1125 14:47:16.376415 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/596b66cb-f7ca-4a23-969a-81168ad1d8b1-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"596b66cb-f7ca-4a23-969a-81168ad1d8b1\") " pod="openstack/nova-cell0-conductor-0" Nov 25 14:47:16 crc kubenswrapper[4796]: I1125 14:47:16.376677 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn6jb\" (UniqueName: \"kubernetes.io/projected/596b66cb-f7ca-4a23-969a-81168ad1d8b1-kube-api-access-xn6jb\") pod \"nova-cell0-conductor-0\" (UID: \"596b66cb-f7ca-4a23-969a-81168ad1d8b1\") " pod="openstack/nova-cell0-conductor-0" Nov 25 14:47:16 crc kubenswrapper[4796]: I1125 14:47:16.376787 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/596b66cb-f7ca-4a23-969a-81168ad1d8b1-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"596b66cb-f7ca-4a23-969a-81168ad1d8b1\") " pod="openstack/nova-cell0-conductor-0" Nov 25 14:47:16 crc kubenswrapper[4796]: I1125 14:47:16.382053 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/596b66cb-f7ca-4a23-969a-81168ad1d8b1-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"596b66cb-f7ca-4a23-969a-81168ad1d8b1\") " pod="openstack/nova-cell0-conductor-0" Nov 25 14:47:16 crc kubenswrapper[4796]: I1125 14:47:16.382190 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/596b66cb-f7ca-4a23-969a-81168ad1d8b1-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"596b66cb-f7ca-4a23-969a-81168ad1d8b1\") " pod="openstack/nova-cell0-conductor-0" Nov 25 14:47:16 crc kubenswrapper[4796]: I1125 14:47:16.393120 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn6jb\" (UniqueName: \"kubernetes.io/projected/596b66cb-f7ca-4a23-969a-81168ad1d8b1-kube-api-access-xn6jb\") pod \"nova-cell0-conductor-0\" 
(UID: \"596b66cb-f7ca-4a23-969a-81168ad1d8b1\") " pod="openstack/nova-cell0-conductor-0" Nov 25 14:47:16 crc kubenswrapper[4796]: I1125 14:47:16.476072 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 25 14:47:17 crc kubenswrapper[4796]: I1125 14:47:17.076973 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff1da4cd-1c62-4098-b3b0-9c38bdae7377","Type":"ContainerStarted","Data":"4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7"} Nov 25 14:47:17 crc kubenswrapper[4796]: I1125 14:47:17.240807 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 14:47:18 crc kubenswrapper[4796]: I1125 14:47:18.099663 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"596b66cb-f7ca-4a23-969a-81168ad1d8b1","Type":"ContainerStarted","Data":"3ce36f68c21b40b688738ec51a65a6738ab3583722d6a58fae89bc81d94d0c5e"} Nov 25 14:47:18 crc kubenswrapper[4796]: I1125 14:47:18.100065 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"596b66cb-f7ca-4a23-969a-81168ad1d8b1","Type":"ContainerStarted","Data":"dcd2c608ea7ce4e72ea20dd0dd9082db1101c5b23570ee77b68289fc31f151ad"} Nov 25 14:47:18 crc kubenswrapper[4796]: I1125 14:47:18.100116 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 25 14:47:18 crc kubenswrapper[4796]: I1125 14:47:18.130879 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.130860679 podStartE2EDuration="2.130860679s" podCreationTimestamp="2025-11-25 14:47:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:47:18.121041362 +0000 UTC m=+1366.464150786" 
watchObservedRunningTime="2025-11-25 14:47:18.130860679 +0000 UTC m=+1366.473970103" Nov 25 14:47:18 crc kubenswrapper[4796]: I1125 14:47:18.284477 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 14:47:18 crc kubenswrapper[4796]: I1125 14:47:18.284956 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 14:47:18 crc kubenswrapper[4796]: I1125 14:47:18.327128 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 14:47:18 crc kubenswrapper[4796]: I1125 14:47:18.333674 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 14:47:19 crc kubenswrapper[4796]: I1125 14:47:19.112185 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff1da4cd-1c62-4098-b3b0-9c38bdae7377","Type":"ContainerStarted","Data":"853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d"} Nov 25 14:47:19 crc kubenswrapper[4796]: I1125 14:47:19.113020 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 14:47:19 crc kubenswrapper[4796]: I1125 14:47:19.113057 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 14:47:19 crc kubenswrapper[4796]: I1125 14:47:19.138016 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.150876454 podStartE2EDuration="6.137990689s" podCreationTimestamp="2025-11-25 14:47:13 +0000 UTC" firstStartedPulling="2025-11-25 14:47:13.96491396 +0000 UTC m=+1362.308023414" lastFinishedPulling="2025-11-25 14:47:17.952028205 +0000 UTC m=+1366.295137649" observedRunningTime="2025-11-25 14:47:19.13099656 +0000 UTC m=+1367.474105984" 
watchObservedRunningTime="2025-11-25 14:47:19.137990689 +0000 UTC m=+1367.481100113" Nov 25 14:47:19 crc kubenswrapper[4796]: I1125 14:47:19.291962 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 14:47:19 crc kubenswrapper[4796]: I1125 14:47:19.292026 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 14:47:19 crc kubenswrapper[4796]: I1125 14:47:19.334693 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 14:47:19 crc kubenswrapper[4796]: I1125 14:47:19.349429 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 14:47:19 crc kubenswrapper[4796]: I1125 14:47:19.468871 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 14:47:19 crc kubenswrapper[4796]: I1125 14:47:19.514428 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 14:47:19 crc kubenswrapper[4796]: I1125 14:47:19.514502 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 14:47:19 crc kubenswrapper[4796]: I1125 14:47:19.514561 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 14:47:19 crc kubenswrapper[4796]: I1125 14:47:19.515484 
4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b89880c276465411a1df30a0fcd1ff1a63ffefdebe8f12fb134cee10c6604130"} pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 14:47:19 crc kubenswrapper[4796]: I1125 14:47:19.515561 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" containerID="cri-o://b89880c276465411a1df30a0fcd1ff1a63ffefdebe8f12fb134cee10c6604130" gracePeriod=600 Nov 25 14:47:20 crc kubenswrapper[4796]: I1125 14:47:20.124450 4796 generic.go:334] "Generic (PLEG): container finished" podID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerID="b89880c276465411a1df30a0fcd1ff1a63ffefdebe8f12fb134cee10c6604130" exitCode=0 Nov 25 14:47:20 crc kubenswrapper[4796]: I1125 14:47:20.124991 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="596b66cb-f7ca-4a23-969a-81168ad1d8b1" containerName="nova-cell0-conductor-conductor" containerID="cri-o://3ce36f68c21b40b688738ec51a65a6738ab3583722d6a58fae89bc81d94d0c5e" gracePeriod=30 Nov 25 14:47:20 crc kubenswrapper[4796]: I1125 14:47:20.124529 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerDied","Data":"b89880c276465411a1df30a0fcd1ff1a63ffefdebe8f12fb134cee10c6604130"} Nov 25 14:47:20 crc kubenswrapper[4796]: I1125 14:47:20.125105 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" 
event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerStarted","Data":"62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7"} Nov 25 14:47:20 crc kubenswrapper[4796]: I1125 14:47:20.125134 4796 scope.go:117] "RemoveContainer" containerID="8401d6a31e01e755468ba5162268a2636f0971d1abaa35f12c5126e3ee6beb3c" Nov 25 14:47:20 crc kubenswrapper[4796]: I1125 14:47:20.126353 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 14:47:20 crc kubenswrapper[4796]: I1125 14:47:20.126525 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 14:47:20 crc kubenswrapper[4796]: I1125 14:47:20.126648 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 14:47:21 crc kubenswrapper[4796]: I1125 14:47:21.368945 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 14:47:21 crc kubenswrapper[4796]: I1125 14:47:21.369409 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 14:47:21 crc kubenswrapper[4796]: I1125 14:47:21.393939 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 14:47:22 crc kubenswrapper[4796]: I1125 14:47:22.148031 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 14:47:22 crc kubenswrapper[4796]: I1125 14:47:22.148297 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 14:47:22 crc kubenswrapper[4796]: I1125 14:47:22.262719 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:47:22 crc kubenswrapper[4796]: I1125 14:47:22.263397 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="ff1da4cd-1c62-4098-b3b0-9c38bdae7377" containerName="ceilometer-central-agent" containerID="cri-o://2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379" gracePeriod=30 Nov 25 14:47:22 crc kubenswrapper[4796]: I1125 14:47:22.263520 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff1da4cd-1c62-4098-b3b0-9c38bdae7377" containerName="ceilometer-notification-agent" containerID="cri-o://3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8" gracePeriod=30 Nov 25 14:47:22 crc kubenswrapper[4796]: I1125 14:47:22.263444 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff1da4cd-1c62-4098-b3b0-9c38bdae7377" containerName="proxy-httpd" containerID="cri-o://853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d" gracePeriod=30 Nov 25 14:47:22 crc kubenswrapper[4796]: I1125 14:47:22.263426 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff1da4cd-1c62-4098-b3b0-9c38bdae7377" containerName="sg-core" containerID="cri-o://4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7" gracePeriod=30 Nov 25 14:47:22 crc kubenswrapper[4796]: I1125 14:47:22.640411 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 14:47:22 crc kubenswrapper[4796]: I1125 14:47:22.995522 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.076155 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.176765 4796 generic.go:334] "Generic (PLEG): container finished" podID="ff1da4cd-1c62-4098-b3b0-9c38bdae7377" containerID="853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d" exitCode=0 Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.176798 4796 generic.go:334] "Generic (PLEG): container finished" podID="ff1da4cd-1c62-4098-b3b0-9c38bdae7377" containerID="4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7" exitCode=2 Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.176808 4796 generic.go:334] "Generic (PLEG): container finished" podID="ff1da4cd-1c62-4098-b3b0-9c38bdae7377" containerID="3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8" exitCode=0 Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.176816 4796 generic.go:334] "Generic (PLEG): container finished" podID="ff1da4cd-1c62-4098-b3b0-9c38bdae7377" containerID="2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379" exitCode=0 Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.177796 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff1da4cd-1c62-4098-b3b0-9c38bdae7377","Type":"ContainerDied","Data":"853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d"} Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.177832 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff1da4cd-1c62-4098-b3b0-9c38bdae7377","Type":"ContainerDied","Data":"4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7"} Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.177845 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff1da4cd-1c62-4098-b3b0-9c38bdae7377","Type":"ContainerDied","Data":"3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8"} Nov 25 14:47:23 crc 
kubenswrapper[4796]: I1125 14:47:23.177856 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff1da4cd-1c62-4098-b3b0-9c38bdae7377","Type":"ContainerDied","Data":"2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379"} Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.177866 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff1da4cd-1c62-4098-b3b0-9c38bdae7377","Type":"ContainerDied","Data":"a278cc2d93358077a2017fa81f21403a29ea1a7589d11474de70e92cff10d43e"} Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.177869 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.177884 4796 scope.go:117] "RemoveContainer" containerID="853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.205410 4796 scope.go:117] "RemoveContainer" containerID="4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.222599 4796 scope.go:117] "RemoveContainer" containerID="3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.230042 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-scripts\") pod \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.230118 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-config-data\") pod \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " Nov 25 14:47:23 crc 
kubenswrapper[4796]: I1125 14:47:23.230181 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl8rt\" (UniqueName: \"kubernetes.io/projected/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-kube-api-access-wl8rt\") pod \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.230234 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-sg-core-conf-yaml\") pod \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.230269 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-run-httpd\") pod \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.230294 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-combined-ca-bundle\") pod \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.230422 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-log-httpd\") pod \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\" (UID: \"ff1da4cd-1c62-4098-b3b0-9c38bdae7377\") " Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.231889 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-log-httpd" (OuterVolumeSpecName: "log-httpd") 
pod "ff1da4cd-1c62-4098-b3b0-9c38bdae7377" (UID: "ff1da4cd-1c62-4098-b3b0-9c38bdae7377"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.232952 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ff1da4cd-1c62-4098-b3b0-9c38bdae7377" (UID: "ff1da4cd-1c62-4098-b3b0-9c38bdae7377"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.236827 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-scripts" (OuterVolumeSpecName: "scripts") pod "ff1da4cd-1c62-4098-b3b0-9c38bdae7377" (UID: "ff1da4cd-1c62-4098-b3b0-9c38bdae7377"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.240034 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-kube-api-access-wl8rt" (OuterVolumeSpecName: "kube-api-access-wl8rt") pod "ff1da4cd-1c62-4098-b3b0-9c38bdae7377" (UID: "ff1da4cd-1c62-4098-b3b0-9c38bdae7377"). InnerVolumeSpecName "kube-api-access-wl8rt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.260959 4796 scope.go:117] "RemoveContainer" containerID="2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.267198 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ff1da4cd-1c62-4098-b3b0-9c38bdae7377" (UID: "ff1da4cd-1c62-4098-b3b0-9c38bdae7377"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.283244 4796 scope.go:117] "RemoveContainer" containerID="853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d" Nov 25 14:47:23 crc kubenswrapper[4796]: E1125 14:47:23.290693 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d\": container with ID starting with 853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d not found: ID does not exist" containerID="853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.290745 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d"} err="failed to get container status \"853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d\": rpc error: code = NotFound desc = could not find container \"853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d\": container with ID starting with 853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d not found: ID does not exist" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.290771 4796 scope.go:117] 
"RemoveContainer" containerID="4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7" Nov 25 14:47:23 crc kubenswrapper[4796]: E1125 14:47:23.291104 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7\": container with ID starting with 4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7 not found: ID does not exist" containerID="4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.291148 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7"} err="failed to get container status \"4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7\": rpc error: code = NotFound desc = could not find container \"4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7\": container with ID starting with 4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7 not found: ID does not exist" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.291178 4796 scope.go:117] "RemoveContainer" containerID="3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8" Nov 25 14:47:23 crc kubenswrapper[4796]: E1125 14:47:23.291403 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8\": container with ID starting with 3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8 not found: ID does not exist" containerID="3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.291424 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8"} err="failed to get container status \"3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8\": rpc error: code = NotFound desc = could not find container \"3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8\": container with ID starting with 3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8 not found: ID does not exist" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.291436 4796 scope.go:117] "RemoveContainer" containerID="2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379" Nov 25 14:47:23 crc kubenswrapper[4796]: E1125 14:47:23.291874 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379\": container with ID starting with 2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379 not found: ID does not exist" containerID="2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.292326 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379"} err="failed to get container status \"2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379\": rpc error: code = NotFound desc = could not find container \"2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379\": container with ID starting with 2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379 not found: ID does not exist" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.292410 4796 scope.go:117] "RemoveContainer" containerID="853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.292951 4796 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d"} err="failed to get container status \"853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d\": rpc error: code = NotFound desc = could not find container \"853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d\": container with ID starting with 853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d not found: ID does not exist" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.292984 4796 scope.go:117] "RemoveContainer" containerID="4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.293283 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7"} err="failed to get container status \"4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7\": rpc error: code = NotFound desc = could not find container \"4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7\": container with ID starting with 4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7 not found: ID does not exist" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.293320 4796 scope.go:117] "RemoveContainer" containerID="3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.293596 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8"} err="failed to get container status \"3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8\": rpc error: code = NotFound desc = could not find container \"3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8\": container with ID starting with 3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8 not 
found: ID does not exist" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.293628 4796 scope.go:117] "RemoveContainer" containerID="2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.294494 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379"} err="failed to get container status \"2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379\": rpc error: code = NotFound desc = could not find container \"2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379\": container with ID starting with 2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379 not found: ID does not exist" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.294579 4796 scope.go:117] "RemoveContainer" containerID="853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.295036 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d"} err="failed to get container status \"853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d\": rpc error: code = NotFound desc = could not find container \"853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d\": container with ID starting with 853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d not found: ID does not exist" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.295120 4796 scope.go:117] "RemoveContainer" containerID="4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.295468 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7"} err="failed to get 
container status \"4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7\": rpc error: code = NotFound desc = could not find container \"4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7\": container with ID starting with 4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7 not found: ID does not exist" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.295500 4796 scope.go:117] "RemoveContainer" containerID="3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.296453 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8"} err="failed to get container status \"3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8\": rpc error: code = NotFound desc = could not find container \"3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8\": container with ID starting with 3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8 not found: ID does not exist" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.296481 4796 scope.go:117] "RemoveContainer" containerID="2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.297072 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379"} err="failed to get container status \"2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379\": rpc error: code = NotFound desc = could not find container \"2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379\": container with ID starting with 2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379 not found: ID does not exist" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.297107 4796 scope.go:117] "RemoveContainer" 
containerID="853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.297285 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d"} err="failed to get container status \"853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d\": rpc error: code = NotFound desc = could not find container \"853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d\": container with ID starting with 853d3d9f0cd5734b5f6a89ee4e99007abd7b1ff4f773f17771536e934616ed0d not found: ID does not exist" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.297314 4796 scope.go:117] "RemoveContainer" containerID="4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.297552 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7"} err="failed to get container status \"4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7\": rpc error: code = NotFound desc = could not find container \"4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7\": container with ID starting with 4ad72e0fe1cd311eb4184e1a70c688083543643457bc417738f4c5c8e869e3e7 not found: ID does not exist" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.297592 4796 scope.go:117] "RemoveContainer" containerID="3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.299267 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8"} err="failed to get container status \"3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8\": rpc error: code = NotFound desc = could 
not find container \"3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8\": container with ID starting with 3f5e889061f30d8877846fb97cd2fd6b313c18d9be8d963c79f2485a4e07a8a8 not found: ID does not exist" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.299309 4796 scope.go:117] "RemoveContainer" containerID="2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.300041 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379"} err="failed to get container status \"2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379\": rpc error: code = NotFound desc = could not find container \"2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379\": container with ID starting with 2c34800ae7604e6b2f58bd5de13d33c978593cfa2d3f708f6c492ef561640379 not found: ID does not exist" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.333592 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl8rt\" (UniqueName: \"kubernetes.io/projected/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-kube-api-access-wl8rt\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.333639 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.333653 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.333664 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.333678 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.342697 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff1da4cd-1c62-4098-b3b0-9c38bdae7377" (UID: "ff1da4cd-1c62-4098-b3b0-9c38bdae7377"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.362779 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-config-data" (OuterVolumeSpecName: "config-data") pod "ff1da4cd-1c62-4098-b3b0-9c38bdae7377" (UID: "ff1da4cd-1c62-4098-b3b0-9c38bdae7377"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.435913 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.435958 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff1da4cd-1c62-4098-b3b0-9c38bdae7377-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.528074 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.539759 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.561460 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:47:23 crc kubenswrapper[4796]: E1125 14:47:23.562066 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff1da4cd-1c62-4098-b3b0-9c38bdae7377" containerName="proxy-httpd" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.562179 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff1da4cd-1c62-4098-b3b0-9c38bdae7377" containerName="proxy-httpd" Nov 25 14:47:23 crc kubenswrapper[4796]: E1125 14:47:23.562249 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff1da4cd-1c62-4098-b3b0-9c38bdae7377" containerName="ceilometer-notification-agent" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.562300 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff1da4cd-1c62-4098-b3b0-9c38bdae7377" containerName="ceilometer-notification-agent" Nov 25 14:47:23 crc kubenswrapper[4796]: E1125 14:47:23.562378 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ff1da4cd-1c62-4098-b3b0-9c38bdae7377" containerName="ceilometer-central-agent" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.562429 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff1da4cd-1c62-4098-b3b0-9c38bdae7377" containerName="ceilometer-central-agent" Nov 25 14:47:23 crc kubenswrapper[4796]: E1125 14:47:23.562506 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff1da4cd-1c62-4098-b3b0-9c38bdae7377" containerName="sg-core" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.562594 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff1da4cd-1c62-4098-b3b0-9c38bdae7377" containerName="sg-core" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.562834 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff1da4cd-1c62-4098-b3b0-9c38bdae7377" containerName="proxy-httpd" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.562908 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff1da4cd-1c62-4098-b3b0-9c38bdae7377" containerName="ceilometer-notification-agent" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.562972 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff1da4cd-1c62-4098-b3b0-9c38bdae7377" containerName="ceilometer-central-agent" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.563032 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff1da4cd-1c62-4098-b3b0-9c38bdae7377" containerName="sg-core" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.568312 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.574000 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.575477 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.588003 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.756154 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8e40f8-47ff-49ed-abde-0ed532f677b7-config-data\") pod \"ceilometer-0\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " pod="openstack/ceilometer-0" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.756563 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b8e40f8-47ff-49ed-abde-0ed532f677b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " pod="openstack/ceilometer-0" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.756608 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b8e40f8-47ff-49ed-abde-0ed532f677b7-scripts\") pod \"ceilometer-0\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " pod="openstack/ceilometer-0" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.756823 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8e40f8-47ff-49ed-abde-0ed532f677b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " 
pod="openstack/ceilometer-0" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.756998 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8e40f8-47ff-49ed-abde-0ed532f677b7-run-httpd\") pod \"ceilometer-0\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " pod="openstack/ceilometer-0" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.757134 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8e40f8-47ff-49ed-abde-0ed532f677b7-log-httpd\") pod \"ceilometer-0\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " pod="openstack/ceilometer-0" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.757306 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k2pn\" (UniqueName: \"kubernetes.io/projected/9b8e40f8-47ff-49ed-abde-0ed532f677b7-kube-api-access-6k2pn\") pod \"ceilometer-0\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " pod="openstack/ceilometer-0" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.858735 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k2pn\" (UniqueName: \"kubernetes.io/projected/9b8e40f8-47ff-49ed-abde-0ed532f677b7-kube-api-access-6k2pn\") pod \"ceilometer-0\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " pod="openstack/ceilometer-0" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.858825 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8e40f8-47ff-49ed-abde-0ed532f677b7-config-data\") pod \"ceilometer-0\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " pod="openstack/ceilometer-0" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.858895 4796 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b8e40f8-47ff-49ed-abde-0ed532f677b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " pod="openstack/ceilometer-0" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.858918 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b8e40f8-47ff-49ed-abde-0ed532f677b7-scripts\") pod \"ceilometer-0\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " pod="openstack/ceilometer-0" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.858987 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8e40f8-47ff-49ed-abde-0ed532f677b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " pod="openstack/ceilometer-0" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.859032 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8e40f8-47ff-49ed-abde-0ed532f677b7-run-httpd\") pod \"ceilometer-0\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " pod="openstack/ceilometer-0" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.859080 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8e40f8-47ff-49ed-abde-0ed532f677b7-log-httpd\") pod \"ceilometer-0\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " pod="openstack/ceilometer-0" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.859814 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8e40f8-47ff-49ed-abde-0ed532f677b7-log-httpd\") pod \"ceilometer-0\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " pod="openstack/ceilometer-0" Nov 25 14:47:23 crc 
kubenswrapper[4796]: I1125 14:47:23.860709 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8e40f8-47ff-49ed-abde-0ed532f677b7-run-httpd\") pod \"ceilometer-0\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " pod="openstack/ceilometer-0" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.865436 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8e40f8-47ff-49ed-abde-0ed532f677b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " pod="openstack/ceilometer-0" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.867436 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b8e40f8-47ff-49ed-abde-0ed532f677b7-scripts\") pod \"ceilometer-0\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " pod="openstack/ceilometer-0" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.868671 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8e40f8-47ff-49ed-abde-0ed532f677b7-config-data\") pod \"ceilometer-0\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " pod="openstack/ceilometer-0" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.873133 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b8e40f8-47ff-49ed-abde-0ed532f677b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " pod="openstack/ceilometer-0" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.887752 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k2pn\" (UniqueName: \"kubernetes.io/projected/9b8e40f8-47ff-49ed-abde-0ed532f677b7-kube-api-access-6k2pn\") pod \"ceilometer-0\" (UID: 
\"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " pod="openstack/ceilometer-0" Nov 25 14:47:23 crc kubenswrapper[4796]: I1125 14:47:23.889176 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:47:24 crc kubenswrapper[4796]: I1125 14:47:24.444315 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff1da4cd-1c62-4098-b3b0-9c38bdae7377" path="/var/lib/kubelet/pods/ff1da4cd-1c62-4098-b3b0-9c38bdae7377/volumes" Nov 25 14:47:24 crc kubenswrapper[4796]: I1125 14:47:24.445643 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:47:25 crc kubenswrapper[4796]: I1125 14:47:25.200181 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8e40f8-47ff-49ed-abde-0ed532f677b7","Type":"ContainerStarted","Data":"daab74c9b02af58e3b9a33b61a7c19c759522bc7610149c40ad17571802500fe"} Nov 25 14:47:25 crc kubenswrapper[4796]: I1125 14:47:25.201018 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8e40f8-47ff-49ed-abde-0ed532f677b7","Type":"ContainerStarted","Data":"4ff138b8bc671655d02010b3f6fddf2f1ff2272f8b79eb47aa8d9e5a95eb1cfa"} Nov 25 14:47:26 crc kubenswrapper[4796]: I1125 14:47:26.212132 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8e40f8-47ff-49ed-abde-0ed532f677b7","Type":"ContainerStarted","Data":"3317badbb80923386edcb26a0ca597e00f051cc0baa863a1078a8154f9130cfd"} Nov 25 14:47:26 crc kubenswrapper[4796]: E1125 14:47:26.478606 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ce36f68c21b40b688738ec51a65a6738ab3583722d6a58fae89bc81d94d0c5e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 25 14:47:26 crc kubenswrapper[4796]: E1125 14:47:26.480227 4796 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ce36f68c21b40b688738ec51a65a6738ab3583722d6a58fae89bc81d94d0c5e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 25 14:47:26 crc kubenswrapper[4796]: E1125 14:47:26.481855 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ce36f68c21b40b688738ec51a65a6738ab3583722d6a58fae89bc81d94d0c5e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 25 14:47:26 crc kubenswrapper[4796]: E1125 14:47:26.481896 4796 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="596b66cb-f7ca-4a23-969a-81168ad1d8b1" containerName="nova-cell0-conductor-conductor" Nov 25 14:47:28 crc kubenswrapper[4796]: I1125 14:47:28.247525 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8e40f8-47ff-49ed-abde-0ed532f677b7","Type":"ContainerStarted","Data":"e6637846eba9881605d8ab2822ecb5e9569633e643f1ba112cc52ba32dc78819"} Nov 25 14:47:29 crc kubenswrapper[4796]: I1125 14:47:29.262667 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8e40f8-47ff-49ed-abde-0ed532f677b7","Type":"ContainerStarted","Data":"109263add3d6fa3ac83e5e7ce6c1ca72361b3f4fec9ced828cd50674ce6686d1"} Nov 25 14:47:29 crc kubenswrapper[4796]: I1125 14:47:29.265175 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 14:47:29 crc kubenswrapper[4796]: I1125 14:47:29.303377 4796 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/ceilometer-0" podStartSLOduration=2.324890767 podStartE2EDuration="6.303351831s" podCreationTimestamp="2025-11-25 14:47:23 +0000 UTC" firstStartedPulling="2025-11-25 14:47:24.424615306 +0000 UTC m=+1372.767724730" lastFinishedPulling="2025-11-25 14:47:28.40307633 +0000 UTC m=+1376.746185794" observedRunningTime="2025-11-25 14:47:29.290939103 +0000 UTC m=+1377.634048547" watchObservedRunningTime="2025-11-25 14:47:29.303351831 +0000 UTC m=+1377.646461255" Nov 25 14:47:31 crc kubenswrapper[4796]: E1125 14:47:31.480011 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ce36f68c21b40b688738ec51a65a6738ab3583722d6a58fae89bc81d94d0c5e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 25 14:47:31 crc kubenswrapper[4796]: E1125 14:47:31.482157 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ce36f68c21b40b688738ec51a65a6738ab3583722d6a58fae89bc81d94d0c5e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 25 14:47:31 crc kubenswrapper[4796]: E1125 14:47:31.484181 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ce36f68c21b40b688738ec51a65a6738ab3583722d6a58fae89bc81d94d0c5e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 25 14:47:31 crc kubenswrapper[4796]: E1125 14:47:31.484223 4796 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" 
podUID="596b66cb-f7ca-4a23-969a-81168ad1d8b1" containerName="nova-cell0-conductor-conductor" Nov 25 14:47:36 crc kubenswrapper[4796]: E1125 14:47:36.478733 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ce36f68c21b40b688738ec51a65a6738ab3583722d6a58fae89bc81d94d0c5e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 25 14:47:36 crc kubenswrapper[4796]: E1125 14:47:36.481185 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ce36f68c21b40b688738ec51a65a6738ab3583722d6a58fae89bc81d94d0c5e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 25 14:47:36 crc kubenswrapper[4796]: E1125 14:47:36.482785 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ce36f68c21b40b688738ec51a65a6738ab3583722d6a58fae89bc81d94d0c5e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 25 14:47:36 crc kubenswrapper[4796]: E1125 14:47:36.482866 4796 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="596b66cb-f7ca-4a23-969a-81168ad1d8b1" containerName="nova-cell0-conductor-conductor" Nov 25 14:47:41 crc kubenswrapper[4796]: E1125 14:47:41.478194 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="3ce36f68c21b40b688738ec51a65a6738ab3583722d6a58fae89bc81d94d0c5e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 25 14:47:41 crc kubenswrapper[4796]: E1125 14:47:41.480284 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ce36f68c21b40b688738ec51a65a6738ab3583722d6a58fae89bc81d94d0c5e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 25 14:47:41 crc kubenswrapper[4796]: E1125 14:47:41.481736 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ce36f68c21b40b688738ec51a65a6738ab3583722d6a58fae89bc81d94d0c5e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 25 14:47:41 crc kubenswrapper[4796]: E1125 14:47:41.481814 4796 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="596b66cb-f7ca-4a23-969a-81168ad1d8b1" containerName="nova-cell0-conductor-conductor" Nov 25 14:47:46 crc kubenswrapper[4796]: E1125 14:47:46.479157 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ce36f68c21b40b688738ec51a65a6738ab3583722d6a58fae89bc81d94d0c5e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 25 14:47:46 crc kubenswrapper[4796]: E1125 14:47:46.483374 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="3ce36f68c21b40b688738ec51a65a6738ab3583722d6a58fae89bc81d94d0c5e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 25 14:47:46 crc kubenswrapper[4796]: E1125 14:47:46.485833 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ce36f68c21b40b688738ec51a65a6738ab3583722d6a58fae89bc81d94d0c5e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 25 14:47:46 crc kubenswrapper[4796]: E1125 14:47:46.485980 4796 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="596b66cb-f7ca-4a23-969a-81168ad1d8b1" containerName="nova-cell0-conductor-conductor" Nov 25 14:47:50 crc kubenswrapper[4796]: I1125 14:47:50.472046 4796 generic.go:334] "Generic (PLEG): container finished" podID="596b66cb-f7ca-4a23-969a-81168ad1d8b1" containerID="3ce36f68c21b40b688738ec51a65a6738ab3583722d6a58fae89bc81d94d0c5e" exitCode=137 Nov 25 14:47:50 crc kubenswrapper[4796]: I1125 14:47:50.472155 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"596b66cb-f7ca-4a23-969a-81168ad1d8b1","Type":"ContainerDied","Data":"3ce36f68c21b40b688738ec51a65a6738ab3583722d6a58fae89bc81d94d0c5e"} Nov 25 14:47:50 crc kubenswrapper[4796]: I1125 14:47:50.565863 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 25 14:47:50 crc kubenswrapper[4796]: I1125 14:47:50.598880 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/596b66cb-f7ca-4a23-969a-81168ad1d8b1-config-data\") pod \"596b66cb-f7ca-4a23-969a-81168ad1d8b1\" (UID: \"596b66cb-f7ca-4a23-969a-81168ad1d8b1\") " Nov 25 14:47:50 crc kubenswrapper[4796]: I1125 14:47:50.603937 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn6jb\" (UniqueName: \"kubernetes.io/projected/596b66cb-f7ca-4a23-969a-81168ad1d8b1-kube-api-access-xn6jb\") pod \"596b66cb-f7ca-4a23-969a-81168ad1d8b1\" (UID: \"596b66cb-f7ca-4a23-969a-81168ad1d8b1\") " Nov 25 14:47:50 crc kubenswrapper[4796]: I1125 14:47:50.604641 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/596b66cb-f7ca-4a23-969a-81168ad1d8b1-combined-ca-bundle\") pod \"596b66cb-f7ca-4a23-969a-81168ad1d8b1\" (UID: \"596b66cb-f7ca-4a23-969a-81168ad1d8b1\") " Nov 25 14:47:50 crc kubenswrapper[4796]: I1125 14:47:50.610169 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/596b66cb-f7ca-4a23-969a-81168ad1d8b1-kube-api-access-xn6jb" (OuterVolumeSpecName: "kube-api-access-xn6jb") pod "596b66cb-f7ca-4a23-969a-81168ad1d8b1" (UID: "596b66cb-f7ca-4a23-969a-81168ad1d8b1"). InnerVolumeSpecName "kube-api-access-xn6jb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:47:50 crc kubenswrapper[4796]: E1125 14:47:50.626891 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/596b66cb-f7ca-4a23-969a-81168ad1d8b1-combined-ca-bundle podName:596b66cb-f7ca-4a23-969a-81168ad1d8b1 nodeName:}" failed. No retries permitted until 2025-11-25 14:47:51.126862146 +0000 UTC m=+1399.469971570 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/596b66cb-f7ca-4a23-969a-81168ad1d8b1-combined-ca-bundle") pod "596b66cb-f7ca-4a23-969a-81168ad1d8b1" (UID: "596b66cb-f7ca-4a23-969a-81168ad1d8b1") : error deleting /var/lib/kubelet/pods/596b66cb-f7ca-4a23-969a-81168ad1d8b1/volume-subpaths: remove /var/lib/kubelet/pods/596b66cb-f7ca-4a23-969a-81168ad1d8b1/volume-subpaths: no such file or directory Nov 25 14:47:50 crc kubenswrapper[4796]: I1125 14:47:50.629151 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/596b66cb-f7ca-4a23-969a-81168ad1d8b1-config-data" (OuterVolumeSpecName: "config-data") pod "596b66cb-f7ca-4a23-969a-81168ad1d8b1" (UID: "596b66cb-f7ca-4a23-969a-81168ad1d8b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:47:50 crc kubenswrapper[4796]: I1125 14:47:50.707551 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/596b66cb-f7ca-4a23-969a-81168ad1d8b1-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:50 crc kubenswrapper[4796]: I1125 14:47:50.707800 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn6jb\" (UniqueName: \"kubernetes.io/projected/596b66cb-f7ca-4a23-969a-81168ad1d8b1-kube-api-access-xn6jb\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:51 crc kubenswrapper[4796]: I1125 14:47:51.216554 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/596b66cb-f7ca-4a23-969a-81168ad1d8b1-combined-ca-bundle\") pod \"596b66cb-f7ca-4a23-969a-81168ad1d8b1\" (UID: \"596b66cb-f7ca-4a23-969a-81168ad1d8b1\") " Nov 25 14:47:51 crc kubenswrapper[4796]: I1125 14:47:51.221495 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/596b66cb-f7ca-4a23-969a-81168ad1d8b1-combined-ca-bundle" 
(OuterVolumeSpecName: "combined-ca-bundle") pod "596b66cb-f7ca-4a23-969a-81168ad1d8b1" (UID: "596b66cb-f7ca-4a23-969a-81168ad1d8b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:47:51 crc kubenswrapper[4796]: I1125 14:47:51.318824 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/596b66cb-f7ca-4a23-969a-81168ad1d8b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:51 crc kubenswrapper[4796]: I1125 14:47:51.510667 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"596b66cb-f7ca-4a23-969a-81168ad1d8b1","Type":"ContainerDied","Data":"dcd2c608ea7ce4e72ea20dd0dd9082db1101c5b23570ee77b68289fc31f151ad"} Nov 25 14:47:51 crc kubenswrapper[4796]: I1125 14:47:51.510739 4796 scope.go:117] "RemoveContainer" containerID="3ce36f68c21b40b688738ec51a65a6738ab3583722d6a58fae89bc81d94d0c5e" Nov 25 14:47:51 crc kubenswrapper[4796]: I1125 14:47:51.510753 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 25 14:47:51 crc kubenswrapper[4796]: I1125 14:47:51.545537 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 14:47:51 crc kubenswrapper[4796]: I1125 14:47:51.556176 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 14:47:51 crc kubenswrapper[4796]: I1125 14:47:51.573853 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 14:47:51 crc kubenswrapper[4796]: E1125 14:47:51.574492 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="596b66cb-f7ca-4a23-969a-81168ad1d8b1" containerName="nova-cell0-conductor-conductor" Nov 25 14:47:51 crc kubenswrapper[4796]: I1125 14:47:51.574515 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="596b66cb-f7ca-4a23-969a-81168ad1d8b1" containerName="nova-cell0-conductor-conductor" Nov 25 14:47:51 crc kubenswrapper[4796]: I1125 14:47:51.574917 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="596b66cb-f7ca-4a23-969a-81168ad1d8b1" containerName="nova-cell0-conductor-conductor" Nov 25 14:47:51 crc kubenswrapper[4796]: I1125 14:47:51.575917 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 25 14:47:51 crc kubenswrapper[4796]: I1125 14:47:51.578653 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 25 14:47:51 crc kubenswrapper[4796]: I1125 14:47:51.578817 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rjcnz" Nov 25 14:47:51 crc kubenswrapper[4796]: I1125 14:47:51.582612 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 14:47:51 crc kubenswrapper[4796]: I1125 14:47:51.624373 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d5bdd76-c116-469f-84a1-c869e4ffb5ce-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3d5bdd76-c116-469f-84a1-c869e4ffb5ce\") " pod="openstack/nova-cell0-conductor-0" Nov 25 14:47:51 crc kubenswrapper[4796]: I1125 14:47:51.624427 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5bdd76-c116-469f-84a1-c869e4ffb5ce-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3d5bdd76-c116-469f-84a1-c869e4ffb5ce\") " pod="openstack/nova-cell0-conductor-0" Nov 25 14:47:51 crc kubenswrapper[4796]: I1125 14:47:51.624478 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bclhw\" (UniqueName: \"kubernetes.io/projected/3d5bdd76-c116-469f-84a1-c869e4ffb5ce-kube-api-access-bclhw\") pod \"nova-cell0-conductor-0\" (UID: \"3d5bdd76-c116-469f-84a1-c869e4ffb5ce\") " pod="openstack/nova-cell0-conductor-0" Nov 25 14:47:51 crc kubenswrapper[4796]: I1125 14:47:51.726030 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3d5bdd76-c116-469f-84a1-c869e4ffb5ce-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3d5bdd76-c116-469f-84a1-c869e4ffb5ce\") " pod="openstack/nova-cell0-conductor-0" Nov 25 14:47:51 crc kubenswrapper[4796]: I1125 14:47:51.726074 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5bdd76-c116-469f-84a1-c869e4ffb5ce-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3d5bdd76-c116-469f-84a1-c869e4ffb5ce\") " pod="openstack/nova-cell0-conductor-0" Nov 25 14:47:51 crc kubenswrapper[4796]: I1125 14:47:51.726117 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bclhw\" (UniqueName: \"kubernetes.io/projected/3d5bdd76-c116-469f-84a1-c869e4ffb5ce-kube-api-access-bclhw\") pod \"nova-cell0-conductor-0\" (UID: \"3d5bdd76-c116-469f-84a1-c869e4ffb5ce\") " pod="openstack/nova-cell0-conductor-0" Nov 25 14:47:51 crc kubenswrapper[4796]: I1125 14:47:51.729537 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d5bdd76-c116-469f-84a1-c869e4ffb5ce-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3d5bdd76-c116-469f-84a1-c869e4ffb5ce\") " pod="openstack/nova-cell0-conductor-0" Nov 25 14:47:51 crc kubenswrapper[4796]: I1125 14:47:51.731227 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5bdd76-c116-469f-84a1-c869e4ffb5ce-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3d5bdd76-c116-469f-84a1-c869e4ffb5ce\") " pod="openstack/nova-cell0-conductor-0" Nov 25 14:47:51 crc kubenswrapper[4796]: I1125 14:47:51.747531 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bclhw\" (UniqueName: \"kubernetes.io/projected/3d5bdd76-c116-469f-84a1-c869e4ffb5ce-kube-api-access-bclhw\") pod \"nova-cell0-conductor-0\" 
(UID: \"3d5bdd76-c116-469f-84a1-c869e4ffb5ce\") " pod="openstack/nova-cell0-conductor-0" Nov 25 14:47:51 crc kubenswrapper[4796]: I1125 14:47:51.898689 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 25 14:47:52 crc kubenswrapper[4796]: I1125 14:47:52.350796 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 14:47:52 crc kubenswrapper[4796]: I1125 14:47:52.430461 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="596b66cb-f7ca-4a23-969a-81168ad1d8b1" path="/var/lib/kubelet/pods/596b66cb-f7ca-4a23-969a-81168ad1d8b1/volumes" Nov 25 14:47:52 crc kubenswrapper[4796]: I1125 14:47:52.521329 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3d5bdd76-c116-469f-84a1-c869e4ffb5ce","Type":"ContainerStarted","Data":"c17fef44fe2a80c92ec02b0f10ccce1ee17d828ee74e34b1ac4204d67d1fd0fe"} Nov 25 14:47:53 crc kubenswrapper[4796]: I1125 14:47:53.536783 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3d5bdd76-c116-469f-84a1-c869e4ffb5ce","Type":"ContainerStarted","Data":"759b19afbc662a97ce53134cfac969886ac5c2690d2384bfbfe4991b597fed32"} Nov 25 14:47:53 crc kubenswrapper[4796]: I1125 14:47:53.537101 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 25 14:47:53 crc kubenswrapper[4796]: I1125 14:47:53.561597 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.56156614 podStartE2EDuration="2.56156614s" podCreationTimestamp="2025-11-25 14:47:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:47:53.550555055 +0000 UTC m=+1401.893664509" watchObservedRunningTime="2025-11-25 14:47:53.56156614 
+0000 UTC m=+1401.904675564" Nov 25 14:47:53 crc kubenswrapper[4796]: I1125 14:47:53.930447 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 14:47:55 crc kubenswrapper[4796]: I1125 14:47:55.558237 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zbqbg"] Nov 25 14:47:55 crc kubenswrapper[4796]: I1125 14:47:55.564134 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zbqbg" Nov 25 14:47:55 crc kubenswrapper[4796]: I1125 14:47:55.579006 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zbqbg"] Nov 25 14:47:55 crc kubenswrapper[4796]: I1125 14:47:55.614315 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgk6d\" (UniqueName: \"kubernetes.io/projected/b13c00ac-c5a5-413c-8df6-a1b7111a87a3-kube-api-access-pgk6d\") pod \"redhat-operators-zbqbg\" (UID: \"b13c00ac-c5a5-413c-8df6-a1b7111a87a3\") " pod="openshift-marketplace/redhat-operators-zbqbg" Nov 25 14:47:55 crc kubenswrapper[4796]: I1125 14:47:55.614406 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b13c00ac-c5a5-413c-8df6-a1b7111a87a3-utilities\") pod \"redhat-operators-zbqbg\" (UID: \"b13c00ac-c5a5-413c-8df6-a1b7111a87a3\") " pod="openshift-marketplace/redhat-operators-zbqbg" Nov 25 14:47:55 crc kubenswrapper[4796]: I1125 14:47:55.614483 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b13c00ac-c5a5-413c-8df6-a1b7111a87a3-catalog-content\") pod \"redhat-operators-zbqbg\" (UID: \"b13c00ac-c5a5-413c-8df6-a1b7111a87a3\") " pod="openshift-marketplace/redhat-operators-zbqbg" Nov 25 14:47:55 crc kubenswrapper[4796]: I1125 
14:47:55.716748 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgk6d\" (UniqueName: \"kubernetes.io/projected/b13c00ac-c5a5-413c-8df6-a1b7111a87a3-kube-api-access-pgk6d\") pod \"redhat-operators-zbqbg\" (UID: \"b13c00ac-c5a5-413c-8df6-a1b7111a87a3\") " pod="openshift-marketplace/redhat-operators-zbqbg" Nov 25 14:47:55 crc kubenswrapper[4796]: I1125 14:47:55.717024 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b13c00ac-c5a5-413c-8df6-a1b7111a87a3-utilities\") pod \"redhat-operators-zbqbg\" (UID: \"b13c00ac-c5a5-413c-8df6-a1b7111a87a3\") " pod="openshift-marketplace/redhat-operators-zbqbg" Nov 25 14:47:55 crc kubenswrapper[4796]: I1125 14:47:55.717154 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b13c00ac-c5a5-413c-8df6-a1b7111a87a3-catalog-content\") pod \"redhat-operators-zbqbg\" (UID: \"b13c00ac-c5a5-413c-8df6-a1b7111a87a3\") " pod="openshift-marketplace/redhat-operators-zbqbg" Nov 25 14:47:55 crc kubenswrapper[4796]: I1125 14:47:55.717724 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b13c00ac-c5a5-413c-8df6-a1b7111a87a3-catalog-content\") pod \"redhat-operators-zbqbg\" (UID: \"b13c00ac-c5a5-413c-8df6-a1b7111a87a3\") " pod="openshift-marketplace/redhat-operators-zbqbg" Nov 25 14:47:55 crc kubenswrapper[4796]: I1125 14:47:55.717881 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b13c00ac-c5a5-413c-8df6-a1b7111a87a3-utilities\") pod \"redhat-operators-zbqbg\" (UID: \"b13c00ac-c5a5-413c-8df6-a1b7111a87a3\") " pod="openshift-marketplace/redhat-operators-zbqbg" Nov 25 14:47:55 crc kubenswrapper[4796]: I1125 14:47:55.739447 4796 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-pgk6d\" (UniqueName: \"kubernetes.io/projected/b13c00ac-c5a5-413c-8df6-a1b7111a87a3-kube-api-access-pgk6d\") pod \"redhat-operators-zbqbg\" (UID: \"b13c00ac-c5a5-413c-8df6-a1b7111a87a3\") " pod="openshift-marketplace/redhat-operators-zbqbg" Nov 25 14:47:55 crc kubenswrapper[4796]: I1125 14:47:55.900005 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zbqbg" Nov 25 14:47:56 crc kubenswrapper[4796]: I1125 14:47:56.394123 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zbqbg"] Nov 25 14:47:56 crc kubenswrapper[4796]: I1125 14:47:56.595326 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbqbg" event={"ID":"b13c00ac-c5a5-413c-8df6-a1b7111a87a3","Type":"ContainerStarted","Data":"cfa51c9693c404f618b97e3b2810e972ef4906d4a8bfaa225b77feeb85a650cf"} Nov 25 14:47:57 crc kubenswrapper[4796]: I1125 14:47:57.607918 4796 generic.go:334] "Generic (PLEG): container finished" podID="b13c00ac-c5a5-413c-8df6-a1b7111a87a3" containerID="33a1aff0d959069f87f792ea5a2b96de2ff2ecd9e11746249914c086d10e4af1" exitCode=0 Nov 25 14:47:57 crc kubenswrapper[4796]: I1125 14:47:57.608030 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbqbg" event={"ID":"b13c00ac-c5a5-413c-8df6-a1b7111a87a3","Type":"ContainerDied","Data":"33a1aff0d959069f87f792ea5a2b96de2ff2ecd9e11746249914c086d10e4af1"} Nov 25 14:47:57 crc kubenswrapper[4796]: I1125 14:47:57.697632 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 14:47:57 crc kubenswrapper[4796]: I1125 14:47:57.697883 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="93d132ee-a59b-4244-8d56-895b7a49b14d" containerName="kube-state-metrics" 
containerID="cri-o://c054952d82d41cf9617850fa4e893e32d90a611f3fd33de5b8aac3e521064048" gracePeriod=30 Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.263802 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.393792 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgqw2\" (UniqueName: \"kubernetes.io/projected/93d132ee-a59b-4244-8d56-895b7a49b14d-kube-api-access-bgqw2\") pod \"93d132ee-a59b-4244-8d56-895b7a49b14d\" (UID: \"93d132ee-a59b-4244-8d56-895b7a49b14d\") " Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.398661 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93d132ee-a59b-4244-8d56-895b7a49b14d-kube-api-access-bgqw2" (OuterVolumeSpecName: "kube-api-access-bgqw2") pod "93d132ee-a59b-4244-8d56-895b7a49b14d" (UID: "93d132ee-a59b-4244-8d56-895b7a49b14d"). InnerVolumeSpecName "kube-api-access-bgqw2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.497797 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgqw2\" (UniqueName: \"kubernetes.io/projected/93d132ee-a59b-4244-8d56-895b7a49b14d-kube-api-access-bgqw2\") on node \"crc\" DevicePath \"\"" Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.623628 4796 generic.go:334] "Generic (PLEG): container finished" podID="93d132ee-a59b-4244-8d56-895b7a49b14d" containerID="c054952d82d41cf9617850fa4e893e32d90a611f3fd33de5b8aac3e521064048" exitCode=2 Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.623689 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"93d132ee-a59b-4244-8d56-895b7a49b14d","Type":"ContainerDied","Data":"c054952d82d41cf9617850fa4e893e32d90a611f3fd33de5b8aac3e521064048"} Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.623764 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"93d132ee-a59b-4244-8d56-895b7a49b14d","Type":"ContainerDied","Data":"b7d14141aec75a925f2f9b736fa8d743a0ae98929084c261d535282e097f58c0"} Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.623789 4796 scope.go:117] "RemoveContainer" containerID="c054952d82d41cf9617850fa4e893e32d90a611f3fd33de5b8aac3e521064048" Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.623799 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.653047 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.668648 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.675770 4796 scope.go:117] "RemoveContainer" containerID="c054952d82d41cf9617850fa4e893e32d90a611f3fd33de5b8aac3e521064048" Nov 25 14:47:58 crc kubenswrapper[4796]: E1125 14:47:58.679479 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c054952d82d41cf9617850fa4e893e32d90a611f3fd33de5b8aac3e521064048\": container with ID starting with c054952d82d41cf9617850fa4e893e32d90a611f3fd33de5b8aac3e521064048 not found: ID does not exist" containerID="c054952d82d41cf9617850fa4e893e32d90a611f3fd33de5b8aac3e521064048" Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.679537 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c054952d82d41cf9617850fa4e893e32d90a611f3fd33de5b8aac3e521064048"} err="failed to get container status \"c054952d82d41cf9617850fa4e893e32d90a611f3fd33de5b8aac3e521064048\": rpc error: code = NotFound desc = could not find container \"c054952d82d41cf9617850fa4e893e32d90a611f3fd33de5b8aac3e521064048\": container with ID starting with c054952d82d41cf9617850fa4e893e32d90a611f3fd33de5b8aac3e521064048 not found: ID does not exist" Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.679713 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 14:47:58 crc kubenswrapper[4796]: E1125 14:47:58.680080 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d132ee-a59b-4244-8d56-895b7a49b14d" containerName="kube-state-metrics" Nov 25 14:47:58 crc 
kubenswrapper[4796]: I1125 14:47:58.680092 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d132ee-a59b-4244-8d56-895b7a49b14d" containerName="kube-state-metrics" Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.680267 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="93d132ee-a59b-4244-8d56-895b7a49b14d" containerName="kube-state-metrics" Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.680913 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.686480 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.686772 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.699868 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.805206 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf22d\" (UniqueName: \"kubernetes.io/projected/da9248b8-0e46-4c9a-837c-b5591fc3e559-kube-api-access-pf22d\") pod \"kube-state-metrics-0\" (UID: \"da9248b8-0e46-4c9a-837c-b5591fc3e559\") " pod="openstack/kube-state-metrics-0" Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.805442 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/da9248b8-0e46-4c9a-837c-b5591fc3e559-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"da9248b8-0e46-4c9a-837c-b5591fc3e559\") " pod="openstack/kube-state-metrics-0" Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.805492 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da9248b8-0e46-4c9a-837c-b5591fc3e559-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"da9248b8-0e46-4c9a-837c-b5591fc3e559\") " pod="openstack/kube-state-metrics-0" Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.805622 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/da9248b8-0e46-4c9a-837c-b5591fc3e559-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"da9248b8-0e46-4c9a-837c-b5591fc3e559\") " pod="openstack/kube-state-metrics-0" Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.907064 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/da9248b8-0e46-4c9a-837c-b5591fc3e559-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"da9248b8-0e46-4c9a-837c-b5591fc3e559\") " pod="openstack/kube-state-metrics-0" Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.907490 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da9248b8-0e46-4c9a-837c-b5591fc3e559-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"da9248b8-0e46-4c9a-837c-b5591fc3e559\") " pod="openstack/kube-state-metrics-0" Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.907590 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/da9248b8-0e46-4c9a-837c-b5591fc3e559-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"da9248b8-0e46-4c9a-837c-b5591fc3e559\") " pod="openstack/kube-state-metrics-0" Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.907655 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pf22d\" (UniqueName: \"kubernetes.io/projected/da9248b8-0e46-4c9a-837c-b5591fc3e559-kube-api-access-pf22d\") pod \"kube-state-metrics-0\" (UID: \"da9248b8-0e46-4c9a-837c-b5591fc3e559\") " pod="openstack/kube-state-metrics-0" Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.910779 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/da9248b8-0e46-4c9a-837c-b5591fc3e559-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"da9248b8-0e46-4c9a-837c-b5591fc3e559\") " pod="openstack/kube-state-metrics-0" Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.911896 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/da9248b8-0e46-4c9a-837c-b5591fc3e559-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"da9248b8-0e46-4c9a-837c-b5591fc3e559\") " pod="openstack/kube-state-metrics-0" Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.927025 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da9248b8-0e46-4c9a-837c-b5591fc3e559-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"da9248b8-0e46-4c9a-837c-b5591fc3e559\") " pod="openstack/kube-state-metrics-0" Nov 25 14:47:58 crc kubenswrapper[4796]: I1125 14:47:58.935774 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf22d\" (UniqueName: \"kubernetes.io/projected/da9248b8-0e46-4c9a-837c-b5591fc3e559-kube-api-access-pf22d\") pod \"kube-state-metrics-0\" (UID: \"da9248b8-0e46-4c9a-837c-b5591fc3e559\") " pod="openstack/kube-state-metrics-0" Nov 25 14:47:59 crc kubenswrapper[4796]: I1125 14:47:59.074114 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 14:47:59 crc kubenswrapper[4796]: I1125 14:47:59.605056 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:47:59 crc kubenswrapper[4796]: I1125 14:47:59.605506 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b8e40f8-47ff-49ed-abde-0ed532f677b7" containerName="ceilometer-central-agent" containerID="cri-o://daab74c9b02af58e3b9a33b61a7c19c759522bc7610149c40ad17571802500fe" gracePeriod=30 Nov 25 14:47:59 crc kubenswrapper[4796]: I1125 14:47:59.605583 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b8e40f8-47ff-49ed-abde-0ed532f677b7" containerName="proxy-httpd" containerID="cri-o://109263add3d6fa3ac83e5e7ce6c1ca72361b3f4fec9ced828cd50674ce6686d1" gracePeriod=30 Nov 25 14:47:59 crc kubenswrapper[4796]: I1125 14:47:59.605677 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b8e40f8-47ff-49ed-abde-0ed532f677b7" containerName="ceilometer-notification-agent" containerID="cri-o://3317badbb80923386edcb26a0ca597e00f051cc0baa863a1078a8154f9130cfd" gracePeriod=30 Nov 25 14:47:59 crc kubenswrapper[4796]: I1125 14:47:59.605757 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b8e40f8-47ff-49ed-abde-0ed532f677b7" containerName="sg-core" containerID="cri-o://e6637846eba9881605d8ab2822ecb5e9569633e643f1ba112cc52ba32dc78819" gracePeriod=30 Nov 25 14:47:59 crc kubenswrapper[4796]: I1125 14:47:59.642168 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbqbg" event={"ID":"b13c00ac-c5a5-413c-8df6-a1b7111a87a3","Type":"ContainerStarted","Data":"d1cbeca4dc5668f48ef290e4378eca931384ffb1ccd4cc2eb2d6c82202bc610c"} Nov 25 14:47:59 crc kubenswrapper[4796]: I1125 14:47:59.878170 4796 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 14:48:00 crc kubenswrapper[4796]: I1125 14:48:00.426375 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93d132ee-a59b-4244-8d56-895b7a49b14d" path="/var/lib/kubelet/pods/93d132ee-a59b-4244-8d56-895b7a49b14d/volumes" Nov 25 14:48:00 crc kubenswrapper[4796]: I1125 14:48:00.655694 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"da9248b8-0e46-4c9a-837c-b5591fc3e559","Type":"ContainerStarted","Data":"350602278f2230d8eead802e08e64037d91cebbbcc1b2f68314d71f9fff0ad70"} Nov 25 14:48:00 crc kubenswrapper[4796]: I1125 14:48:00.672292 4796 generic.go:334] "Generic (PLEG): container finished" podID="9b8e40f8-47ff-49ed-abde-0ed532f677b7" containerID="109263add3d6fa3ac83e5e7ce6c1ca72361b3f4fec9ced828cd50674ce6686d1" exitCode=0 Nov 25 14:48:00 crc kubenswrapper[4796]: I1125 14:48:00.672363 4796 generic.go:334] "Generic (PLEG): container finished" podID="9b8e40f8-47ff-49ed-abde-0ed532f677b7" containerID="e6637846eba9881605d8ab2822ecb5e9569633e643f1ba112cc52ba32dc78819" exitCode=2 Nov 25 14:48:00 crc kubenswrapper[4796]: I1125 14:48:00.672480 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8e40f8-47ff-49ed-abde-0ed532f677b7","Type":"ContainerDied","Data":"109263add3d6fa3ac83e5e7ce6c1ca72361b3f4fec9ced828cd50674ce6686d1"} Nov 25 14:48:00 crc kubenswrapper[4796]: I1125 14:48:00.672545 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8e40f8-47ff-49ed-abde-0ed532f677b7","Type":"ContainerDied","Data":"e6637846eba9881605d8ab2822ecb5e9569633e643f1ba112cc52ba32dc78819"} Nov 25 14:48:00 crc kubenswrapper[4796]: I1125 14:48:00.678659 4796 generic.go:334] "Generic (PLEG): container finished" podID="b13c00ac-c5a5-413c-8df6-a1b7111a87a3" containerID="d1cbeca4dc5668f48ef290e4378eca931384ffb1ccd4cc2eb2d6c82202bc610c" 
exitCode=0 Nov 25 14:48:00 crc kubenswrapper[4796]: I1125 14:48:00.678723 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbqbg" event={"ID":"b13c00ac-c5a5-413c-8df6-a1b7111a87a3","Type":"ContainerDied","Data":"d1cbeca4dc5668f48ef290e4378eca931384ffb1ccd4cc2eb2d6c82202bc610c"} Nov 25 14:48:01 crc kubenswrapper[4796]: I1125 14:48:01.693554 4796 generic.go:334] "Generic (PLEG): container finished" podID="9b8e40f8-47ff-49ed-abde-0ed532f677b7" containerID="daab74c9b02af58e3b9a33b61a7c19c759522bc7610149c40ad17571802500fe" exitCode=0 Nov 25 14:48:01 crc kubenswrapper[4796]: I1125 14:48:01.693619 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8e40f8-47ff-49ed-abde-0ed532f677b7","Type":"ContainerDied","Data":"daab74c9b02af58e3b9a33b61a7c19c759522bc7610149c40ad17571802500fe"} Nov 25 14:48:01 crc kubenswrapper[4796]: I1125 14:48:01.934793 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 25 14:48:02 crc kubenswrapper[4796]: I1125 14:48:02.739283 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-r97qb"] Nov 25 14:48:02 crc kubenswrapper[4796]: I1125 14:48:02.741260 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-r97qb" Nov 25 14:48:02 crc kubenswrapper[4796]: I1125 14:48:02.745364 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 25 14:48:02 crc kubenswrapper[4796]: I1125 14:48:02.745627 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 25 14:48:02 crc kubenswrapper[4796]: I1125 14:48:02.751180 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-r97qb"] Nov 25 14:48:02 crc kubenswrapper[4796]: I1125 14:48:02.785136 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf005684-c69a-4402-8a4d-82ea423e1902-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-r97qb\" (UID: \"cf005684-c69a-4402-8a4d-82ea423e1902\") " pod="openstack/nova-cell0-cell-mapping-r97qb" Nov 25 14:48:02 crc kubenswrapper[4796]: I1125 14:48:02.785262 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf005684-c69a-4402-8a4d-82ea423e1902-config-data\") pod \"nova-cell0-cell-mapping-r97qb\" (UID: \"cf005684-c69a-4402-8a4d-82ea423e1902\") " pod="openstack/nova-cell0-cell-mapping-r97qb" Nov 25 14:48:02 crc kubenswrapper[4796]: I1125 14:48:02.785297 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl7vf\" (UniqueName: \"kubernetes.io/projected/cf005684-c69a-4402-8a4d-82ea423e1902-kube-api-access-rl7vf\") pod \"nova-cell0-cell-mapping-r97qb\" (UID: \"cf005684-c69a-4402-8a4d-82ea423e1902\") " pod="openstack/nova-cell0-cell-mapping-r97qb" Nov 25 14:48:02 crc kubenswrapper[4796]: I1125 14:48:02.785438 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/cf005684-c69a-4402-8a4d-82ea423e1902-scripts\") pod \"nova-cell0-cell-mapping-r97qb\" (UID: \"cf005684-c69a-4402-8a4d-82ea423e1902\") " pod="openstack/nova-cell0-cell-mapping-r97qb" Nov 25 14:48:02 crc kubenswrapper[4796]: I1125 14:48:02.886837 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf005684-c69a-4402-8a4d-82ea423e1902-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-r97qb\" (UID: \"cf005684-c69a-4402-8a4d-82ea423e1902\") " pod="openstack/nova-cell0-cell-mapping-r97qb" Nov 25 14:48:02 crc kubenswrapper[4796]: I1125 14:48:02.886907 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf005684-c69a-4402-8a4d-82ea423e1902-config-data\") pod \"nova-cell0-cell-mapping-r97qb\" (UID: \"cf005684-c69a-4402-8a4d-82ea423e1902\") " pod="openstack/nova-cell0-cell-mapping-r97qb" Nov 25 14:48:02 crc kubenswrapper[4796]: I1125 14:48:02.886928 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl7vf\" (UniqueName: \"kubernetes.io/projected/cf005684-c69a-4402-8a4d-82ea423e1902-kube-api-access-rl7vf\") pod \"nova-cell0-cell-mapping-r97qb\" (UID: \"cf005684-c69a-4402-8a4d-82ea423e1902\") " pod="openstack/nova-cell0-cell-mapping-r97qb" Nov 25 14:48:02 crc kubenswrapper[4796]: I1125 14:48:02.887001 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf005684-c69a-4402-8a4d-82ea423e1902-scripts\") pod \"nova-cell0-cell-mapping-r97qb\" (UID: \"cf005684-c69a-4402-8a4d-82ea423e1902\") " pod="openstack/nova-cell0-cell-mapping-r97qb" Nov 25 14:48:02 crc kubenswrapper[4796]: I1125 14:48:02.902741 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/cf005684-c69a-4402-8a4d-82ea423e1902-scripts\") pod \"nova-cell0-cell-mapping-r97qb\" (UID: \"cf005684-c69a-4402-8a4d-82ea423e1902\") " pod="openstack/nova-cell0-cell-mapping-r97qb" Nov 25 14:48:02 crc kubenswrapper[4796]: I1125 14:48:02.903389 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf005684-c69a-4402-8a4d-82ea423e1902-config-data\") pod \"nova-cell0-cell-mapping-r97qb\" (UID: \"cf005684-c69a-4402-8a4d-82ea423e1902\") " pod="openstack/nova-cell0-cell-mapping-r97qb" Nov 25 14:48:02 crc kubenswrapper[4796]: I1125 14:48:02.903669 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf005684-c69a-4402-8a4d-82ea423e1902-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-r97qb\" (UID: \"cf005684-c69a-4402-8a4d-82ea423e1902\") " pod="openstack/nova-cell0-cell-mapping-r97qb" Nov 25 14:48:02 crc kubenswrapper[4796]: I1125 14:48:02.913772 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl7vf\" (UniqueName: \"kubernetes.io/projected/cf005684-c69a-4402-8a4d-82ea423e1902-kube-api-access-rl7vf\") pod \"nova-cell0-cell-mapping-r97qb\" (UID: \"cf005684-c69a-4402-8a4d-82ea423e1902\") " pod="openstack/nova-cell0-cell-mapping-r97qb" Nov 25 14:48:02 crc kubenswrapper[4796]: I1125 14:48:02.958554 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 14:48:02 crc kubenswrapper[4796]: I1125 14:48:02.959798 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:02 crc kubenswrapper[4796]: I1125 14:48:02.967192 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 25 14:48:02 crc kubenswrapper[4796]: I1125 14:48:02.976428 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 14:48:02 crc kubenswrapper[4796]: I1125 14:48:02.978500 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 14:48:02 crc kubenswrapper[4796]: I1125 14:48:02.988880 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d488015-7c7f-4601-a332-580819e6e571-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7d488015-7c7f-4601-a332-580819e6e571\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:02 crc kubenswrapper[4796]: I1125 14:48:02.988952 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nc55\" (UniqueName: \"kubernetes.io/projected/7d488015-7c7f-4601-a332-580819e6e571-kube-api-access-2nc55\") pod \"nova-cell1-novncproxy-0\" (UID: \"7d488015-7c7f-4601-a332-580819e6e571\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:02 crc kubenswrapper[4796]: I1125 14:48:02.988998 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d488015-7c7f-4601-a332-580819e6e571-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7d488015-7c7f-4601-a332-580819e6e571\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:02 crc kubenswrapper[4796]: I1125 14:48:02.998870 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.005822 
4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.009702 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.015058 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.016841 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.022631 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.032260 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.091267 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d488015-7c7f-4601-a332-580819e6e571-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7d488015-7c7f-4601-a332-580819e6e571\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.091341 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8d275e6-1127-4816-9001-303d9d595ac6-logs\") pod \"nova-metadata-0\" (UID: \"f8d275e6-1127-4816-9001-303d9d595ac6\") " pod="openstack/nova-metadata-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.091368 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d275e6-1127-4816-9001-303d9d595ac6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f8d275e6-1127-4816-9001-303d9d595ac6\") " pod="openstack/nova-metadata-0" Nov 25 
14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.091395 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18c961c7-a1f1-4e65-a64d-26a1787ee053-logs\") pod \"nova-api-0\" (UID: \"18c961c7-a1f1-4e65-a64d-26a1787ee053\") " pod="openstack/nova-api-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.091433 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9248\" (UniqueName: \"kubernetes.io/projected/18c961c7-a1f1-4e65-a64d-26a1787ee053-kube-api-access-h9248\") pod \"nova-api-0\" (UID: \"18c961c7-a1f1-4e65-a64d-26a1787ee053\") " pod="openstack/nova-api-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.091465 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18c961c7-a1f1-4e65-a64d-26a1787ee053-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"18c961c7-a1f1-4e65-a64d-26a1787ee053\") " pod="openstack/nova-api-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.091557 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d488015-7c7f-4601-a332-580819e6e571-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7d488015-7c7f-4601-a332-580819e6e571\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.091599 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8d275e6-1127-4816-9001-303d9d595ac6-config-data\") pod \"nova-metadata-0\" (UID: \"f8d275e6-1127-4816-9001-303d9d595ac6\") " pod="openstack/nova-metadata-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.091625 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18c961c7-a1f1-4e65-a64d-26a1787ee053-config-data\") pod \"nova-api-0\" (UID: \"18c961c7-a1f1-4e65-a64d-26a1787ee053\") " pod="openstack/nova-api-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.091671 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st5dm\" (UniqueName: \"kubernetes.io/projected/f8d275e6-1127-4816-9001-303d9d595ac6-kube-api-access-st5dm\") pod \"nova-metadata-0\" (UID: \"f8d275e6-1127-4816-9001-303d9d595ac6\") " pod="openstack/nova-metadata-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.091724 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nc55\" (UniqueName: \"kubernetes.io/projected/7d488015-7c7f-4601-a332-580819e6e571-kube-api-access-2nc55\") pod \"nova-cell1-novncproxy-0\" (UID: \"7d488015-7c7f-4601-a332-580819e6e571\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.093113 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.094428 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.104197 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.105605 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d488015-7c7f-4601-a332-580819e6e571-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7d488015-7c7f-4601-a332-580819e6e571\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.113639 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.117320 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nc55\" (UniqueName: \"kubernetes.io/projected/7d488015-7c7f-4601-a332-580819e6e571-kube-api-access-2nc55\") pod \"nova-cell1-novncproxy-0\" (UID: \"7d488015-7c7f-4601-a332-580819e6e571\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.118316 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-r97qb" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.126529 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d488015-7c7f-4601-a332-580819e6e571-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7d488015-7c7f-4601-a332-580819e6e571\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.175550 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-ljhfb"] Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.177125 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.192829 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2mw4\" (UniqueName: \"kubernetes.io/projected/4108e6ff-21af-4a40-89bc-7224726ca1aa-kube-api-access-j2mw4\") pod \"nova-scheduler-0\" (UID: \"4108e6ff-21af-4a40-89bc-7224726ca1aa\") " pod="openstack/nova-scheduler-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.192886 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4108e6ff-21af-4a40-89bc-7224726ca1aa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4108e6ff-21af-4a40-89bc-7224726ca1aa\") " pod="openstack/nova-scheduler-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.192911 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8d275e6-1127-4816-9001-303d9d595ac6-config-data\") pod \"nova-metadata-0\" (UID: \"f8d275e6-1127-4816-9001-303d9d595ac6\") " pod="openstack/nova-metadata-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.192933 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18c961c7-a1f1-4e65-a64d-26a1787ee053-config-data\") pod \"nova-api-0\" (UID: \"18c961c7-a1f1-4e65-a64d-26a1787ee053\") " pod="openstack/nova-api-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.192968 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st5dm\" (UniqueName: \"kubernetes.io/projected/f8d275e6-1127-4816-9001-303d9d595ac6-kube-api-access-st5dm\") pod \"nova-metadata-0\" (UID: \"f8d275e6-1127-4816-9001-303d9d595ac6\") " pod="openstack/nova-metadata-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.193029 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8d275e6-1127-4816-9001-303d9d595ac6-logs\") pod \"nova-metadata-0\" (UID: \"f8d275e6-1127-4816-9001-303d9d595ac6\") " pod="openstack/nova-metadata-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.193043 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d275e6-1127-4816-9001-303d9d595ac6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f8d275e6-1127-4816-9001-303d9d595ac6\") " pod="openstack/nova-metadata-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.193061 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18c961c7-a1f1-4e65-a64d-26a1787ee053-logs\") pod \"nova-api-0\" (UID: \"18c961c7-a1f1-4e65-a64d-26a1787ee053\") " pod="openstack/nova-api-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.193087 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4108e6ff-21af-4a40-89bc-7224726ca1aa-config-data\") pod \"nova-scheduler-0\" (UID: \"4108e6ff-21af-4a40-89bc-7224726ca1aa\") " pod="openstack/nova-scheduler-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.193108 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9248\" (UniqueName: \"kubernetes.io/projected/18c961c7-a1f1-4e65-a64d-26a1787ee053-kube-api-access-h9248\") pod \"nova-api-0\" (UID: \"18c961c7-a1f1-4e65-a64d-26a1787ee053\") " pod="openstack/nova-api-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.193136 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18c961c7-a1f1-4e65-a64d-26a1787ee053-combined-ca-bundle\") pod \"nova-api-0\" 
(UID: \"18c961c7-a1f1-4e65-a64d-26a1787ee053\") " pod="openstack/nova-api-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.194075 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18c961c7-a1f1-4e65-a64d-26a1787ee053-logs\") pod \"nova-api-0\" (UID: \"18c961c7-a1f1-4e65-a64d-26a1787ee053\") " pod="openstack/nova-api-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.194416 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8d275e6-1127-4816-9001-303d9d595ac6-logs\") pod \"nova-metadata-0\" (UID: \"f8d275e6-1127-4816-9001-303d9d595ac6\") " pod="openstack/nova-metadata-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.197949 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18c961c7-a1f1-4e65-a64d-26a1787ee053-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"18c961c7-a1f1-4e65-a64d-26a1787ee053\") " pod="openstack/nova-api-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.201950 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d275e6-1127-4816-9001-303d9d595ac6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f8d275e6-1127-4816-9001-303d9d595ac6\") " pod="openstack/nova-metadata-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.206525 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8d275e6-1127-4816-9001-303d9d595ac6-config-data\") pod \"nova-metadata-0\" (UID: \"f8d275e6-1127-4816-9001-303d9d595ac6\") " pod="openstack/nova-metadata-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.209159 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/18c961c7-a1f1-4e65-a64d-26a1787ee053-config-data\") pod \"nova-api-0\" (UID: \"18c961c7-a1f1-4e65-a64d-26a1787ee053\") " pod="openstack/nova-api-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.211607 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-ljhfb"] Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.219184 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st5dm\" (UniqueName: \"kubernetes.io/projected/f8d275e6-1127-4816-9001-303d9d595ac6-kube-api-access-st5dm\") pod \"nova-metadata-0\" (UID: \"f8d275e6-1127-4816-9001-303d9d595ac6\") " pod="openstack/nova-metadata-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.224298 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9248\" (UniqueName: \"kubernetes.io/projected/18c961c7-a1f1-4e65-a64d-26a1787ee053-kube-api-access-h9248\") pod \"nova-api-0\" (UID: \"18c961c7-a1f1-4e65-a64d-26a1787ee053\") " pod="openstack/nova-api-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.295016 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4108e6ff-21af-4a40-89bc-7224726ca1aa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4108e6ff-21af-4a40-89bc-7224726ca1aa\") " pod="openstack/nova-scheduler-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.295359 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-dns-svc\") pod \"dnsmasq-dns-757b4f8459-ljhfb\" (UID: \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\") " pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.295400 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-ljhfb\" (UID: \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\") " pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.295427 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-ljhfb\" (UID: \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\") " pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.295524 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4108e6ff-21af-4a40-89bc-7224726ca1aa-config-data\") pod \"nova-scheduler-0\" (UID: \"4108e6ff-21af-4a40-89bc-7224726ca1aa\") " pod="openstack/nova-scheduler-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.295589 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-config\") pod \"dnsmasq-dns-757b4f8459-ljhfb\" (UID: \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\") " pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.295625 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcdfq\" (UniqueName: \"kubernetes.io/projected/bf4002d8-3c37-4da7-8abc-1c9167a7a275-kube-api-access-bcdfq\") pod \"dnsmasq-dns-757b4f8459-ljhfb\" (UID: \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\") " pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.295672 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-j2mw4\" (UniqueName: \"kubernetes.io/projected/4108e6ff-21af-4a40-89bc-7224726ca1aa-kube-api-access-j2mw4\") pod \"nova-scheduler-0\" (UID: \"4108e6ff-21af-4a40-89bc-7224726ca1aa\") " pod="openstack/nova-scheduler-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.295697 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-ljhfb\" (UID: \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\") " pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.303051 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4108e6ff-21af-4a40-89bc-7224726ca1aa-config-data\") pod \"nova-scheduler-0\" (UID: \"4108e6ff-21af-4a40-89bc-7224726ca1aa\") " pod="openstack/nova-scheduler-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.305210 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.305980 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4108e6ff-21af-4a40-89bc-7224726ca1aa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4108e6ff-21af-4a40-89bc-7224726ca1aa\") " pod="openstack/nova-scheduler-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.321123 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2mw4\" (UniqueName: \"kubernetes.io/projected/4108e6ff-21af-4a40-89bc-7224726ca1aa-kube-api-access-j2mw4\") pod \"nova-scheduler-0\" (UID: \"4108e6ff-21af-4a40-89bc-7224726ca1aa\") " pod="openstack/nova-scheduler-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.333759 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.362989 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.396934 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-config\") pod \"dnsmasq-dns-757b4f8459-ljhfb\" (UID: \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\") " pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.396986 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcdfq\" (UniqueName: \"kubernetes.io/projected/bf4002d8-3c37-4da7-8abc-1c9167a7a275-kube-api-access-bcdfq\") pod \"dnsmasq-dns-757b4f8459-ljhfb\" (UID: \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\") " pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.397035 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-ljhfb\" (UID: \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\") " pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.397095 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-dns-svc\") pod \"dnsmasq-dns-757b4f8459-ljhfb\" (UID: \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\") " pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.397301 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-ljhfb\" (UID: \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\") " pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" Nov 25 14:48:03 
crc kubenswrapper[4796]: I1125 14:48:03.397334 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-ljhfb\" (UID: \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\") " pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.438730 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-ljhfb\" (UID: \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\") " pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.438924 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcdfq\" (UniqueName: \"kubernetes.io/projected/bf4002d8-3c37-4da7-8abc-1c9167a7a275-kube-api-access-bcdfq\") pod \"dnsmasq-dns-757b4f8459-ljhfb\" (UID: \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\") " pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.539214 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-config\") pod \"dnsmasq-dns-757b4f8459-ljhfb\" (UID: \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\") " pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.540707 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-ljhfb\" (UID: \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\") " pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.541028 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-dns-svc\") pod \"dnsmasq-dns-757b4f8459-ljhfb\" (UID: \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\") " pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.543285 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-ljhfb\" (UID: \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\") " pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.559078 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.572367 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.636699 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.709134 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b8e40f8-47ff-49ed-abde-0ed532f677b7-sg-core-conf-yaml\") pod \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.709234 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8e40f8-47ff-49ed-abde-0ed532f677b7-run-httpd\") pod \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.709324 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8e40f8-47ff-49ed-abde-0ed532f677b7-config-data\") pod \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.709360 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8e40f8-47ff-49ed-abde-0ed532f677b7-combined-ca-bundle\") pod \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.709448 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8e40f8-47ff-49ed-abde-0ed532f677b7-log-httpd\") pod \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.710123 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9b8e40f8-47ff-49ed-abde-0ed532f677b7-scripts\") pod \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.710220 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k2pn\" (UniqueName: \"kubernetes.io/projected/9b8e40f8-47ff-49ed-abde-0ed532f677b7-kube-api-access-6k2pn\") pod \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\" (UID: \"9b8e40f8-47ff-49ed-abde-0ed532f677b7\") " Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.723026 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b8e40f8-47ff-49ed-abde-0ed532f677b7-kube-api-access-6k2pn" (OuterVolumeSpecName: "kube-api-access-6k2pn") pod "9b8e40f8-47ff-49ed-abde-0ed532f677b7" (UID: "9b8e40f8-47ff-49ed-abde-0ed532f677b7"). InnerVolumeSpecName "kube-api-access-6k2pn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.724753 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k2pn\" (UniqueName: \"kubernetes.io/projected/9b8e40f8-47ff-49ed-abde-0ed532f677b7-kube-api-access-6k2pn\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.725373 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b8e40f8-47ff-49ed-abde-0ed532f677b7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9b8e40f8-47ff-49ed-abde-0ed532f677b7" (UID: "9b8e40f8-47ff-49ed-abde-0ed532f677b7"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.729059 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b8e40f8-47ff-49ed-abde-0ed532f677b7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9b8e40f8-47ff-49ed-abde-0ed532f677b7" (UID: "9b8e40f8-47ff-49ed-abde-0ed532f677b7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.743798 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b8e40f8-47ff-49ed-abde-0ed532f677b7-scripts" (OuterVolumeSpecName: "scripts") pod "9b8e40f8-47ff-49ed-abde-0ed532f677b7" (UID: "9b8e40f8-47ff-49ed-abde-0ed532f677b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.767639 4796 generic.go:334] "Generic (PLEG): container finished" podID="9b8e40f8-47ff-49ed-abde-0ed532f677b7" containerID="3317badbb80923386edcb26a0ca597e00f051cc0baa863a1078a8154f9130cfd" exitCode=0 Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.767675 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8e40f8-47ff-49ed-abde-0ed532f677b7","Type":"ContainerDied","Data":"3317badbb80923386edcb26a0ca597e00f051cc0baa863a1078a8154f9130cfd"} Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.767704 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8e40f8-47ff-49ed-abde-0ed532f677b7","Type":"ContainerDied","Data":"4ff138b8bc671655d02010b3f6fddf2f1ff2272f8b79eb47aa8d9e5a95eb1cfa"} Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.767720 4796 scope.go:117] "RemoveContainer" containerID="109263add3d6fa3ac83e5e7ce6c1ca72361b3f4fec9ced828cd50674ce6686d1" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.767864 4796 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.769832 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b8e40f8-47ff-49ed-abde-0ed532f677b7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9b8e40f8-47ff-49ed-abde-0ed532f677b7" (UID: "9b8e40f8-47ff-49ed-abde-0ed532f677b7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.771641 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sd7sr"] Nov 25 14:48:03 crc kubenswrapper[4796]: E1125 14:48:03.772179 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b8e40f8-47ff-49ed-abde-0ed532f677b7" containerName="ceilometer-notification-agent" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.772194 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b8e40f8-47ff-49ed-abde-0ed532f677b7" containerName="ceilometer-notification-agent" Nov 25 14:48:03 crc kubenswrapper[4796]: E1125 14:48:03.772208 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b8e40f8-47ff-49ed-abde-0ed532f677b7" containerName="ceilometer-central-agent" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.772216 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b8e40f8-47ff-49ed-abde-0ed532f677b7" containerName="ceilometer-central-agent" Nov 25 14:48:03 crc kubenswrapper[4796]: E1125 14:48:03.772231 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b8e40f8-47ff-49ed-abde-0ed532f677b7" containerName="sg-core" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.772239 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b8e40f8-47ff-49ed-abde-0ed532f677b7" containerName="sg-core" Nov 25 14:48:03 crc kubenswrapper[4796]: E1125 
14:48:03.772256 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b8e40f8-47ff-49ed-abde-0ed532f677b7" containerName="proxy-httpd" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.772263 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b8e40f8-47ff-49ed-abde-0ed532f677b7" containerName="proxy-httpd" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.772520 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b8e40f8-47ff-49ed-abde-0ed532f677b7" containerName="sg-core" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.772548 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b8e40f8-47ff-49ed-abde-0ed532f677b7" containerName="ceilometer-central-agent" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.772564 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b8e40f8-47ff-49ed-abde-0ed532f677b7" containerName="proxy-httpd" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.772597 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b8e40f8-47ff-49ed-abde-0ed532f677b7" containerName="ceilometer-notification-agent" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.773406 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sd7sr" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.775722 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.776626 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.781282 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sd7sr"] Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.802533 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-r97qb"] Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.825999 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0249bda-04f8-417e-bc09-c57484f3a607-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-sd7sr\" (UID: \"c0249bda-04f8-417e-bc09-c57484f3a607\") " pod="openstack/nova-cell1-conductor-db-sync-sd7sr" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.826032 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0249bda-04f8-417e-bc09-c57484f3a607-scripts\") pod \"nova-cell1-conductor-db-sync-sd7sr\" (UID: \"c0249bda-04f8-417e-bc09-c57484f3a607\") " pod="openstack/nova-cell1-conductor-db-sync-sd7sr" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.826123 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0249bda-04f8-417e-bc09-c57484f3a607-config-data\") pod \"nova-cell1-conductor-db-sync-sd7sr\" (UID: \"c0249bda-04f8-417e-bc09-c57484f3a607\") " pod="openstack/nova-cell1-conductor-db-sync-sd7sr" 
Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.826200 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptc46\" (UniqueName: \"kubernetes.io/projected/c0249bda-04f8-417e-bc09-c57484f3a607-kube-api-access-ptc46\") pod \"nova-cell1-conductor-db-sync-sd7sr\" (UID: \"c0249bda-04f8-417e-bc09-c57484f3a607\") " pod="openstack/nova-cell1-conductor-db-sync-sd7sr" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.826283 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b8e40f8-47ff-49ed-abde-0ed532f677b7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.826301 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8e40f8-47ff-49ed-abde-0ed532f677b7-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.826312 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8e40f8-47ff-49ed-abde-0ed532f677b7-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.826323 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b8e40f8-47ff-49ed-abde-0ed532f677b7-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.856937 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b8e40f8-47ff-49ed-abde-0ed532f677b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b8e40f8-47ff-49ed-abde-0ed532f677b7" (UID: "9b8e40f8-47ff-49ed-abde-0ed532f677b7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.878165 4796 scope.go:117] "RemoveContainer" containerID="e6637846eba9881605d8ab2822ecb5e9569633e643f1ba112cc52ba32dc78819" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.907398 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.938641 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0249bda-04f8-417e-bc09-c57484f3a607-config-data\") pod \"nova-cell1-conductor-db-sync-sd7sr\" (UID: \"c0249bda-04f8-417e-bc09-c57484f3a607\") " pod="openstack/nova-cell1-conductor-db-sync-sd7sr" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.938748 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptc46\" (UniqueName: \"kubernetes.io/projected/c0249bda-04f8-417e-bc09-c57484f3a607-kube-api-access-ptc46\") pod \"nova-cell1-conductor-db-sync-sd7sr\" (UID: \"c0249bda-04f8-417e-bc09-c57484f3a607\") " pod="openstack/nova-cell1-conductor-db-sync-sd7sr" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.938944 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0249bda-04f8-417e-bc09-c57484f3a607-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-sd7sr\" (UID: \"c0249bda-04f8-417e-bc09-c57484f3a607\") " pod="openstack/nova-cell1-conductor-db-sync-sd7sr" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.938972 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0249bda-04f8-417e-bc09-c57484f3a607-scripts\") pod \"nova-cell1-conductor-db-sync-sd7sr\" (UID: \"c0249bda-04f8-417e-bc09-c57484f3a607\") " pod="openstack/nova-cell1-conductor-db-sync-sd7sr" Nov 25 14:48:03 crc 
kubenswrapper[4796]: I1125 14:48:03.939179 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8e40f8-47ff-49ed-abde-0ed532f677b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.941458 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b8e40f8-47ff-49ed-abde-0ed532f677b7-config-data" (OuterVolumeSpecName: "config-data") pod "9b8e40f8-47ff-49ed-abde-0ed532f677b7" (UID: "9b8e40f8-47ff-49ed-abde-0ed532f677b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.941746 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0249bda-04f8-417e-bc09-c57484f3a607-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-sd7sr\" (UID: \"c0249bda-04f8-417e-bc09-c57484f3a607\") " pod="openstack/nova-cell1-conductor-db-sync-sd7sr" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.943951 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0249bda-04f8-417e-bc09-c57484f3a607-config-data\") pod \"nova-cell1-conductor-db-sync-sd7sr\" (UID: \"c0249bda-04f8-417e-bc09-c57484f3a607\") " pod="openstack/nova-cell1-conductor-db-sync-sd7sr" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.946322 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0249bda-04f8-417e-bc09-c57484f3a607-scripts\") pod \"nova-cell1-conductor-db-sync-sd7sr\" (UID: \"c0249bda-04f8-417e-bc09-c57484f3a607\") " pod="openstack/nova-cell1-conductor-db-sync-sd7sr" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.959112 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptc46\" 
(UniqueName: \"kubernetes.io/projected/c0249bda-04f8-417e-bc09-c57484f3a607-kube-api-access-ptc46\") pod \"nova-cell1-conductor-db-sync-sd7sr\" (UID: \"c0249bda-04f8-417e-bc09-c57484f3a607\") " pod="openstack/nova-cell1-conductor-db-sync-sd7sr" Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.990281 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 14:48:03 crc kubenswrapper[4796]: I1125 14:48:03.998481 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.040825 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8e40f8-47ff-49ed-abde-0ed532f677b7-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.109287 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.111963 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sd7sr" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.119119 4796 scope.go:117] "RemoveContainer" containerID="3317badbb80923386edcb26a0ca597e00f051cc0baa863a1078a8154f9130cfd" Nov 25 14:48:04 crc kubenswrapper[4796]: W1125 14:48:04.123793 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18c961c7_a1f1_4e65_a64d_26a1787ee053.slice/crio-0ead1b7ed3d9e450a278d9e11feb439314b75af9614b8f40821d834246488fbf WatchSource:0}: Error finding container 0ead1b7ed3d9e450a278d9e11feb439314b75af9614b8f40821d834246488fbf: Status 404 returned error can't find the container with id 0ead1b7ed3d9e450a278d9e11feb439314b75af9614b8f40821d834246488fbf Nov 25 14:48:04 crc kubenswrapper[4796]: W1125 14:48:04.124608 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d488015_7c7f_4601_a332_580819e6e571.slice/crio-931fe2549977de5714a3ae15453b078b8fb3235e74070c7709a224329f97e24e WatchSource:0}: Error finding container 931fe2549977de5714a3ae15453b078b8fb3235e74070c7709a224329f97e24e: Status 404 returned error can't find the container with id 931fe2549977de5714a3ae15453b078b8fb3235e74070c7709a224329f97e24e Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.125167 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.145542 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.149454 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.151992 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.152191 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.162837 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.179377 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.242756 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-ljhfb"] Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.246010 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " pod="openstack/ceilometer-0" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.246099 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " pod="openstack/ceilometer-0" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.246136 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " 
pod="openstack/ceilometer-0" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.246177 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d778cf84-10a7-49cf-be96-d14d18e960e0-log-httpd\") pod \"ceilometer-0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " pod="openstack/ceilometer-0" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.246192 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-898gx\" (UniqueName: \"kubernetes.io/projected/d778cf84-10a7-49cf-be96-d14d18e960e0-kube-api-access-898gx\") pod \"ceilometer-0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " pod="openstack/ceilometer-0" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.246215 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d778cf84-10a7-49cf-be96-d14d18e960e0-run-httpd\") pod \"ceilometer-0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " pod="openstack/ceilometer-0" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.246228 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-scripts\") pod \"ceilometer-0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " pod="openstack/ceilometer-0" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.246258 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-config-data\") pod \"ceilometer-0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " pod="openstack/ceilometer-0" Nov 25 14:48:04 crc kubenswrapper[4796]: W1125 14:48:04.285942 4796 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf4002d8_3c37_4da7_8abc_1c9167a7a275.slice/crio-c9f20dc2b85cd3af2f81955535912fbde5a5c537028c7ac118e40145af741ccb WatchSource:0}: Error finding container c9f20dc2b85cd3af2f81955535912fbde5a5c537028c7ac118e40145af741ccb: Status 404 returned error can't find the container with id c9f20dc2b85cd3af2f81955535912fbde5a5c537028c7ac118e40145af741ccb Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.301281 4796 scope.go:117] "RemoveContainer" containerID="daab74c9b02af58e3b9a33b61a7c19c759522bc7610149c40ad17571802500fe" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.319096 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.349615 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " pod="openstack/ceilometer-0" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.349708 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d778cf84-10a7-49cf-be96-d14d18e960e0-log-httpd\") pod \"ceilometer-0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " pod="openstack/ceilometer-0" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.349729 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-898gx\" (UniqueName: \"kubernetes.io/projected/d778cf84-10a7-49cf-be96-d14d18e960e0-kube-api-access-898gx\") pod \"ceilometer-0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " pod="openstack/ceilometer-0" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.349767 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/d778cf84-10a7-49cf-be96-d14d18e960e0-run-httpd\") pod \"ceilometer-0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " pod="openstack/ceilometer-0" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.349783 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-scripts\") pod \"ceilometer-0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " pod="openstack/ceilometer-0" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.350811 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d778cf84-10a7-49cf-be96-d14d18e960e0-log-httpd\") pod \"ceilometer-0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " pod="openstack/ceilometer-0" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.351382 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-config-data\") pod \"ceilometer-0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " pod="openstack/ceilometer-0" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.351764 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " pod="openstack/ceilometer-0" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.351848 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " pod="openstack/ceilometer-0" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 
14:48:04.354945 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d778cf84-10a7-49cf-be96-d14d18e960e0-run-httpd\") pod \"ceilometer-0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " pod="openstack/ceilometer-0" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.355171 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " pod="openstack/ceilometer-0" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.356342 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-scripts\") pod \"ceilometer-0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " pod="openstack/ceilometer-0" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.356849 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " pod="openstack/ceilometer-0" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.357273 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-config-data\") pod \"ceilometer-0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " pod="openstack/ceilometer-0" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.358839 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " 
pod="openstack/ceilometer-0" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.373841 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-898gx\" (UniqueName: \"kubernetes.io/projected/d778cf84-10a7-49cf-be96-d14d18e960e0-kube-api-access-898gx\") pod \"ceilometer-0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " pod="openstack/ceilometer-0" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.405116 4796 scope.go:117] "RemoveContainer" containerID="109263add3d6fa3ac83e5e7ce6c1ca72361b3f4fec9ced828cd50674ce6686d1" Nov 25 14:48:04 crc kubenswrapper[4796]: E1125 14:48:04.406211 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"109263add3d6fa3ac83e5e7ce6c1ca72361b3f4fec9ced828cd50674ce6686d1\": container with ID starting with 109263add3d6fa3ac83e5e7ce6c1ca72361b3f4fec9ced828cd50674ce6686d1 not found: ID does not exist" containerID="109263add3d6fa3ac83e5e7ce6c1ca72361b3f4fec9ced828cd50674ce6686d1" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.406238 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"109263add3d6fa3ac83e5e7ce6c1ca72361b3f4fec9ced828cd50674ce6686d1"} err="failed to get container status \"109263add3d6fa3ac83e5e7ce6c1ca72361b3f4fec9ced828cd50674ce6686d1\": rpc error: code = NotFound desc = could not find container \"109263add3d6fa3ac83e5e7ce6c1ca72361b3f4fec9ced828cd50674ce6686d1\": container with ID starting with 109263add3d6fa3ac83e5e7ce6c1ca72361b3f4fec9ced828cd50674ce6686d1 not found: ID does not exist" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.406257 4796 scope.go:117] "RemoveContainer" containerID="e6637846eba9881605d8ab2822ecb5e9569633e643f1ba112cc52ba32dc78819" Nov 25 14:48:04 crc kubenswrapper[4796]: E1125 14:48:04.406757 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e6637846eba9881605d8ab2822ecb5e9569633e643f1ba112cc52ba32dc78819\": container with ID starting with e6637846eba9881605d8ab2822ecb5e9569633e643f1ba112cc52ba32dc78819 not found: ID does not exist" containerID="e6637846eba9881605d8ab2822ecb5e9569633e643f1ba112cc52ba32dc78819" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.406785 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6637846eba9881605d8ab2822ecb5e9569633e643f1ba112cc52ba32dc78819"} err="failed to get container status \"e6637846eba9881605d8ab2822ecb5e9569633e643f1ba112cc52ba32dc78819\": rpc error: code = NotFound desc = could not find container \"e6637846eba9881605d8ab2822ecb5e9569633e643f1ba112cc52ba32dc78819\": container with ID starting with e6637846eba9881605d8ab2822ecb5e9569633e643f1ba112cc52ba32dc78819 not found: ID does not exist" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.406805 4796 scope.go:117] "RemoveContainer" containerID="3317badbb80923386edcb26a0ca597e00f051cc0baa863a1078a8154f9130cfd" Nov 25 14:48:04 crc kubenswrapper[4796]: E1125 14:48:04.406991 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3317badbb80923386edcb26a0ca597e00f051cc0baa863a1078a8154f9130cfd\": container with ID starting with 3317badbb80923386edcb26a0ca597e00f051cc0baa863a1078a8154f9130cfd not found: ID does not exist" containerID="3317badbb80923386edcb26a0ca597e00f051cc0baa863a1078a8154f9130cfd" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.407011 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3317badbb80923386edcb26a0ca597e00f051cc0baa863a1078a8154f9130cfd"} err="failed to get container status \"3317badbb80923386edcb26a0ca597e00f051cc0baa863a1078a8154f9130cfd\": rpc error: code = NotFound desc = could not find container \"3317badbb80923386edcb26a0ca597e00f051cc0baa863a1078a8154f9130cfd\": container with ID 
starting with 3317badbb80923386edcb26a0ca597e00f051cc0baa863a1078a8154f9130cfd not found: ID does not exist" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.407026 4796 scope.go:117] "RemoveContainer" containerID="daab74c9b02af58e3b9a33b61a7c19c759522bc7610149c40ad17571802500fe" Nov 25 14:48:04 crc kubenswrapper[4796]: E1125 14:48:04.407254 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daab74c9b02af58e3b9a33b61a7c19c759522bc7610149c40ad17571802500fe\": container with ID starting with daab74c9b02af58e3b9a33b61a7c19c759522bc7610149c40ad17571802500fe not found: ID does not exist" containerID="daab74c9b02af58e3b9a33b61a7c19c759522bc7610149c40ad17571802500fe" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.407273 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daab74c9b02af58e3b9a33b61a7c19c759522bc7610149c40ad17571802500fe"} err="failed to get container status \"daab74c9b02af58e3b9a33b61a7c19c759522bc7610149c40ad17571802500fe\": rpc error: code = NotFound desc = could not find container \"daab74c9b02af58e3b9a33b61a7c19c759522bc7610149c40ad17571802500fe\": container with ID starting with daab74c9b02af58e3b9a33b61a7c19c759522bc7610149c40ad17571802500fe not found: ID does not exist" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.422563 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b8e40f8-47ff-49ed-abde-0ed532f677b7" path="/var/lib/kubelet/pods/9b8e40f8-47ff-49ed-abde-0ed532f677b7/volumes" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.566652 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.757724 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sd7sr"] Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.791721 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" event={"ID":"bf4002d8-3c37-4da7-8abc-1c9167a7a275","Type":"ContainerStarted","Data":"c9f20dc2b85cd3af2f81955535912fbde5a5c537028c7ac118e40145af741ccb"} Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.801646 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"18c961c7-a1f1-4e65-a64d-26a1787ee053","Type":"ContainerStarted","Data":"0ead1b7ed3d9e450a278d9e11feb439314b75af9614b8f40821d834246488fbf"} Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.807180 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sd7sr" event={"ID":"c0249bda-04f8-417e-bc09-c57484f3a607","Type":"ContainerStarted","Data":"b9d6ac5888c2c29e4b62579d00f0198da6df3f167ba705496029523e33e1b34f"} Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.814635 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7d488015-7c7f-4601-a332-580819e6e571","Type":"ContainerStarted","Data":"931fe2549977de5714a3ae15453b078b8fb3235e74070c7709a224329f97e24e"} Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.821934 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f8d275e6-1127-4816-9001-303d9d595ac6","Type":"ContainerStarted","Data":"0f08eedfab38ebbfae3c8c3cbd146c28f607099b10454a9fd1b1f5885e5bf944"} Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.823217 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"4108e6ff-21af-4a40-89bc-7224726ca1aa","Type":"ContainerStarted","Data":"1e060020e8e18834953cdeaa3fe35dd9119cc8a05edfec10340c6a51094dbe6e"} Nov 25 14:48:04 crc kubenswrapper[4796]: I1125 14:48:04.826231 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-r97qb" event={"ID":"cf005684-c69a-4402-8a4d-82ea423e1902","Type":"ContainerStarted","Data":"0dfbd796212a2c3be9ac895cf09e54ae57200e7e5c36b476deb0c9660365994b"} Nov 25 14:48:05 crc kubenswrapper[4796]: I1125 14:48:05.054482 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:48:05 crc kubenswrapper[4796]: W1125 14:48:05.058773 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd778cf84_10a7_49cf_be96_d14d18e960e0.slice/crio-f1d8a8b07cb2d81ef35dd21e243f1ff8dcd2da3c188cd2e01952b9410c9fd74e WatchSource:0}: Error finding container f1d8a8b07cb2d81ef35dd21e243f1ff8dcd2da3c188cd2e01952b9410c9fd74e: Status 404 returned error can't find the container with id f1d8a8b07cb2d81ef35dd21e243f1ff8dcd2da3c188cd2e01952b9410c9fd74e Nov 25 14:48:05 crc kubenswrapper[4796]: I1125 14:48:05.836688 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d778cf84-10a7-49cf-be96-d14d18e960e0","Type":"ContainerStarted","Data":"f1d8a8b07cb2d81ef35dd21e243f1ff8dcd2da3c188cd2e01952b9410c9fd74e"} Nov 25 14:48:06 crc kubenswrapper[4796]: I1125 14:48:06.214835 4796 patch_prober.go:28] interesting pod/router-default-5444994796-c6rl5 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 25 14:48:06 crc kubenswrapper[4796]: I1125 14:48:06.215223 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-c6rl5" 
podUID="0162f2df-c29a-4c00-b445-67a9bae4c5ad" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 14:48:06 crc kubenswrapper[4796]: I1125 14:48:06.584238 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 14:48:06 crc kubenswrapper[4796]: I1125 14:48:06.595314 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 14:48:08 crc kubenswrapper[4796]: I1125 14:48:08.872804 4796 generic.go:334] "Generic (PLEG): container finished" podID="bf4002d8-3c37-4da7-8abc-1c9167a7a275" containerID="3de62eeb0cf5474fdc201d0d0ec8d96b79d532d0536dfd8dffb23e69d9739aae" exitCode=0 Nov 25 14:48:08 crc kubenswrapper[4796]: I1125 14:48:08.873402 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" event={"ID":"bf4002d8-3c37-4da7-8abc-1c9167a7a275","Type":"ContainerDied","Data":"3de62eeb0cf5474fdc201d0d0ec8d96b79d532d0536dfd8dffb23e69d9739aae"} Nov 25 14:48:08 crc kubenswrapper[4796]: I1125 14:48:08.879484 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sd7sr" event={"ID":"c0249bda-04f8-417e-bc09-c57484f3a607","Type":"ContainerStarted","Data":"58974433ef838816cc6c7ceab6c45bb4e2c9442206cf48282249f48517f0f8ca"} Nov 25 14:48:08 crc kubenswrapper[4796]: I1125 14:48:08.880977 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-r97qb" event={"ID":"cf005684-c69a-4402-8a4d-82ea423e1902","Type":"ContainerStarted","Data":"2316935db37d883c3cf95cf2cefed00ee2aebccdd2c67757fb69b230a280b356"} Nov 25 14:48:08 crc kubenswrapper[4796]: I1125 14:48:08.922909 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-sd7sr" podStartSLOduration=5.922888169 podStartE2EDuration="5.922888169s" 
podCreationTimestamp="2025-11-25 14:48:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:48:08.914668901 +0000 UTC m=+1417.257778345" watchObservedRunningTime="2025-11-25 14:48:08.922888169 +0000 UTC m=+1417.265997593" Nov 25 14:48:08 crc kubenswrapper[4796]: I1125 14:48:08.945273 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-r97qb" podStartSLOduration=6.945247839 podStartE2EDuration="6.945247839s" podCreationTimestamp="2025-11-25 14:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:48:08.93343617 +0000 UTC m=+1417.276545594" watchObservedRunningTime="2025-11-25 14:48:08.945247839 +0000 UTC m=+1417.288357263" Nov 25 14:48:09 crc kubenswrapper[4796]: I1125 14:48:09.894455 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbqbg" event={"ID":"b13c00ac-c5a5-413c-8df6-a1b7111a87a3","Type":"ContainerStarted","Data":"52b951187fea403d0a6c92c0d2ef573501c015dbfc6b66707fba528e940ec296"} Nov 25 14:48:09 crc kubenswrapper[4796]: I1125 14:48:09.920932 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"da9248b8-0e46-4c9a-837c-b5591fc3e559","Type":"ContainerStarted","Data":"d832a6dfa48f9036893fc6eed3a6274fde19ac061cafbae1dcb86adb5d28a670"} Nov 25 14:48:09 crc kubenswrapper[4796]: I1125 14:48:09.921175 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 14:48:09 crc kubenswrapper[4796]: I1125 14:48:09.929639 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zbqbg" podStartSLOduration=3.511423265 podStartE2EDuration="14.929619447s" podCreationTimestamp="2025-11-25 14:47:55 +0000 UTC" 
firstStartedPulling="2025-11-25 14:47:57.611546905 +0000 UTC m=+1405.954656329" lastFinishedPulling="2025-11-25 14:48:09.029743077 +0000 UTC m=+1417.372852511" observedRunningTime="2025-11-25 14:48:09.92271762 +0000 UTC m=+1418.265827044" watchObservedRunningTime="2025-11-25 14:48:09.929619447 +0000 UTC m=+1418.272728871" Nov 25 14:48:09 crc kubenswrapper[4796]: I1125 14:48:09.939370 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" event={"ID":"bf4002d8-3c37-4da7-8abc-1c9167a7a275","Type":"ContainerStarted","Data":"c03ee5fb490755c9b4cc444d5381ccf74fc326170c06df26139faf2ef97af89a"} Nov 25 14:48:09 crc kubenswrapper[4796]: I1125 14:48:09.940299 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" Nov 25 14:48:09 crc kubenswrapper[4796]: I1125 14:48:09.943034 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d778cf84-10a7-49cf-be96-d14d18e960e0","Type":"ContainerStarted","Data":"90132cf5aca020aa130b5f67145e856a25a827baf21bcee6c26d59786b1dab12"} Nov 25 14:48:09 crc kubenswrapper[4796]: I1125 14:48:09.966867 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.786556221 podStartE2EDuration="11.966841643s" podCreationTimestamp="2025-11-25 14:47:58 +0000 UTC" firstStartedPulling="2025-11-25 14:47:59.890691267 +0000 UTC m=+1408.233800701" lastFinishedPulling="2025-11-25 14:48:09.070976679 +0000 UTC m=+1417.414086123" observedRunningTime="2025-11-25 14:48:09.953161684 +0000 UTC m=+1418.296271108" watchObservedRunningTime="2025-11-25 14:48:09.966841643 +0000 UTC m=+1418.309951087" Nov 25 14:48:09 crc kubenswrapper[4796]: I1125 14:48:09.974555 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" podStartSLOduration=6.974535585 podStartE2EDuration="6.974535585s" 
podCreationTimestamp="2025-11-25 14:48:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:48:09.973964497 +0000 UTC m=+1418.317073921" watchObservedRunningTime="2025-11-25 14:48:09.974535585 +0000 UTC m=+1418.317645009" Nov 25 14:48:14 crc kubenswrapper[4796]: I1125 14:48:14.996495 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4108e6ff-21af-4a40-89bc-7224726ca1aa","Type":"ContainerStarted","Data":"58df70f7d2e435b5820a91a302a03991470ad7d29853160cc7b5ce33ada5febc"} Nov 25 14:48:15 crc kubenswrapper[4796]: I1125 14:48:15.000735 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"18c961c7-a1f1-4e65-a64d-26a1787ee053","Type":"ContainerStarted","Data":"26f97120aade152816f55057f5ef6e108de5d93d06a1052864268e000f57664a"} Nov 25 14:48:15 crc kubenswrapper[4796]: I1125 14:48:15.009486 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d778cf84-10a7-49cf-be96-d14d18e960e0","Type":"ContainerStarted","Data":"6211ed8f64f386d07abdabfc5df303947a85ead27bf560d7d5324080262cd8f8"} Nov 25 14:48:15 crc kubenswrapper[4796]: I1125 14:48:15.012169 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7d488015-7c7f-4601-a332-580819e6e571","Type":"ContainerStarted","Data":"a415fcb8a66ddea15366e1b0a06ead28f09f623219e55d09bfb932aacb4de794"} Nov 25 14:48:15 crc kubenswrapper[4796]: I1125 14:48:15.012340 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="7d488015-7c7f-4601-a332-580819e6e571" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://a415fcb8a66ddea15366e1b0a06ead28f09f623219e55d09bfb932aacb4de794" gracePeriod=30 Nov 25 14:48:15 crc kubenswrapper[4796]: I1125 14:48:15.018361 4796 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f8d275e6-1127-4816-9001-303d9d595ac6","Type":"ContainerStarted","Data":"0a24bfb6272b180cb81a5e7bd76e282a67601c0d4c5f3539ef5e7db6a67074d2"} Nov 25 14:48:15 crc kubenswrapper[4796]: I1125 14:48:15.018390 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.133456258 podStartE2EDuration="12.018375623s" podCreationTimestamp="2025-11-25 14:48:03 +0000 UTC" firstStartedPulling="2025-11-25 14:48:04.3481641 +0000 UTC m=+1412.691273524" lastFinishedPulling="2025-11-25 14:48:14.233083465 +0000 UTC m=+1422.576192889" observedRunningTime="2025-11-25 14:48:15.014739669 +0000 UTC m=+1423.357849093" watchObservedRunningTime="2025-11-25 14:48:15.018375623 +0000 UTC m=+1423.361485047" Nov 25 14:48:15 crc kubenswrapper[4796]: I1125 14:48:15.018415 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f8d275e6-1127-4816-9001-303d9d595ac6","Type":"ContainerStarted","Data":"72a9c0731f917011e384c6353774bbf855c59dcffaaedefe0d53d540efc98345"} Nov 25 14:48:15 crc kubenswrapper[4796]: I1125 14:48:15.900679 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zbqbg" Nov 25 14:48:15 crc kubenswrapper[4796]: I1125 14:48:15.901307 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zbqbg" Nov 25 14:48:16 crc kubenswrapper[4796]: I1125 14:48:16.035385 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"18c961c7-a1f1-4e65-a64d-26a1787ee053","Type":"ContainerStarted","Data":"141d313cbbbc894b91231110daa8519db551e91b5f1aa8e9900d3b09329d12ed"} Nov 25 14:48:16 crc kubenswrapper[4796]: I1125 14:48:16.040456 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="f8d275e6-1127-4816-9001-303d9d595ac6" containerName="nova-metadata-log" containerID="cri-o://72a9c0731f917011e384c6353774bbf855c59dcffaaedefe0d53d540efc98345" gracePeriod=30 Nov 25 14:48:16 crc kubenswrapper[4796]: I1125 14:48:16.040466 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d778cf84-10a7-49cf-be96-d14d18e960e0","Type":"ContainerStarted","Data":"bc39447c9ba005826ac51690eac012facc2b65e31b9fb195500062d7f7920948"} Nov 25 14:48:16 crc kubenswrapper[4796]: I1125 14:48:16.040593 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f8d275e6-1127-4816-9001-303d9d595ac6" containerName="nova-metadata-metadata" containerID="cri-o://0a24bfb6272b180cb81a5e7bd76e282a67601c0d4c5f3539ef5e7db6a67074d2" gracePeriod=30 Nov 25 14:48:16 crc kubenswrapper[4796]: I1125 14:48:16.068403 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.107819333 podStartE2EDuration="14.068377477s" podCreationTimestamp="2025-11-25 14:48:02 +0000 UTC" firstStartedPulling="2025-11-25 14:48:04.273734358 +0000 UTC m=+1412.616843772" lastFinishedPulling="2025-11-25 14:48:14.234292492 +0000 UTC m=+1422.577401916" observedRunningTime="2025-11-25 14:48:15.042878911 +0000 UTC m=+1423.385988335" watchObservedRunningTime="2025-11-25 14:48:16.068377477 +0000 UTC m=+1424.411487161" Nov 25 14:48:16 crc kubenswrapper[4796]: I1125 14:48:16.068692 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.993723078 podStartE2EDuration="14.068684917s" podCreationTimestamp="2025-11-25 14:48:02 +0000 UTC" firstStartedPulling="2025-11-25 14:48:04.172956 +0000 UTC m=+1412.516065424" lastFinishedPulling="2025-11-25 14:48:14.247917819 +0000 UTC m=+1422.591027263" observedRunningTime="2025-11-25 14:48:16.05409401 +0000 UTC m=+1424.397203474" 
watchObservedRunningTime="2025-11-25 14:48:16.068684917 +0000 UTC m=+1424.411794361" Nov 25 14:48:16 crc kubenswrapper[4796]: I1125 14:48:16.082935 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.9852785429999997 podStartE2EDuration="14.082891632s" podCreationTimestamp="2025-11-25 14:48:02 +0000 UTC" firstStartedPulling="2025-11-25 14:48:04.118875675 +0000 UTC m=+1412.461985109" lastFinishedPulling="2025-11-25 14:48:14.216488774 +0000 UTC m=+1422.559598198" observedRunningTime="2025-11-25 14:48:16.077196724 +0000 UTC m=+1424.420306158" watchObservedRunningTime="2025-11-25 14:48:16.082891632 +0000 UTC m=+1424.426001056" Nov 25 14:48:16 crc kubenswrapper[4796]: I1125 14:48:16.956945 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zbqbg" podUID="b13c00ac-c5a5-413c-8df6-a1b7111a87a3" containerName="registry-server" probeResult="failure" output=< Nov 25 14:48:16 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Nov 25 14:48:16 crc kubenswrapper[4796]: > Nov 25 14:48:17 crc kubenswrapper[4796]: I1125 14:48:17.056904 4796 generic.go:334] "Generic (PLEG): container finished" podID="f8d275e6-1127-4816-9001-303d9d595ac6" containerID="0a24bfb6272b180cb81a5e7bd76e282a67601c0d4c5f3539ef5e7db6a67074d2" exitCode=0 Nov 25 14:48:17 crc kubenswrapper[4796]: I1125 14:48:17.058201 4796 generic.go:334] "Generic (PLEG): container finished" podID="f8d275e6-1127-4816-9001-303d9d595ac6" containerID="72a9c0731f917011e384c6353774bbf855c59dcffaaedefe0d53d540efc98345" exitCode=143 Nov 25 14:48:17 crc kubenswrapper[4796]: I1125 14:48:17.059810 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f8d275e6-1127-4816-9001-303d9d595ac6","Type":"ContainerDied","Data":"0a24bfb6272b180cb81a5e7bd76e282a67601c0d4c5f3539ef5e7db6a67074d2"} Nov 25 14:48:17 crc kubenswrapper[4796]: I1125 14:48:17.060008 
4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f8d275e6-1127-4816-9001-303d9d595ac6","Type":"ContainerDied","Data":"72a9c0731f917011e384c6353774bbf855c59dcffaaedefe0d53d540efc98345"} Nov 25 14:48:18 crc kubenswrapper[4796]: I1125 14:48:18.306074 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:18 crc kubenswrapper[4796]: I1125 14:48:18.334766 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 14:48:18 crc kubenswrapper[4796]: I1125 14:48:18.334841 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 14:48:18 crc kubenswrapper[4796]: I1125 14:48:18.369558 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 14:48:18 crc kubenswrapper[4796]: I1125 14:48:18.543866 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8d275e6-1127-4816-9001-303d9d595ac6-config-data\") pod \"f8d275e6-1127-4816-9001-303d9d595ac6\" (UID: \"f8d275e6-1127-4816-9001-303d9d595ac6\") " Nov 25 14:48:18 crc kubenswrapper[4796]: I1125 14:48:18.543952 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d275e6-1127-4816-9001-303d9d595ac6-combined-ca-bundle\") pod \"f8d275e6-1127-4816-9001-303d9d595ac6\" (UID: \"f8d275e6-1127-4816-9001-303d9d595ac6\") " Nov 25 14:48:18 crc kubenswrapper[4796]: I1125 14:48:18.544063 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st5dm\" (UniqueName: \"kubernetes.io/projected/f8d275e6-1127-4816-9001-303d9d595ac6-kube-api-access-st5dm\") pod \"f8d275e6-1127-4816-9001-303d9d595ac6\" (UID: \"f8d275e6-1127-4816-9001-303d9d595ac6\") " Nov 25 
14:48:18 crc kubenswrapper[4796]: I1125 14:48:18.544277 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8d275e6-1127-4816-9001-303d9d595ac6-logs\") pod \"f8d275e6-1127-4816-9001-303d9d595ac6\" (UID: \"f8d275e6-1127-4816-9001-303d9d595ac6\") " Nov 25 14:48:18 crc kubenswrapper[4796]: I1125 14:48:18.544814 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8d275e6-1127-4816-9001-303d9d595ac6-logs" (OuterVolumeSpecName: "logs") pod "f8d275e6-1127-4816-9001-303d9d595ac6" (UID: "f8d275e6-1127-4816-9001-303d9d595ac6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:48:18 crc kubenswrapper[4796]: I1125 14:48:18.546341 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8d275e6-1127-4816-9001-303d9d595ac6-logs\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:18 crc kubenswrapper[4796]: I1125 14:48:18.559808 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 14:48:18 crc kubenswrapper[4796]: I1125 14:48:18.567665 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8d275e6-1127-4816-9001-303d9d595ac6-kube-api-access-st5dm" (OuterVolumeSpecName: "kube-api-access-st5dm") pod "f8d275e6-1127-4816-9001-303d9d595ac6" (UID: "f8d275e6-1127-4816-9001-303d9d595ac6"). InnerVolumeSpecName "kube-api-access-st5dm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:48:18 crc kubenswrapper[4796]: I1125 14:48:18.573708 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" Nov 25 14:48:18 crc kubenswrapper[4796]: I1125 14:48:18.601389 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d275e6-1127-4816-9001-303d9d595ac6-config-data" (OuterVolumeSpecName: "config-data") pod "f8d275e6-1127-4816-9001-303d9d595ac6" (UID: "f8d275e6-1127-4816-9001-303d9d595ac6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:18 crc kubenswrapper[4796]: I1125 14:48:18.618997 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d275e6-1127-4816-9001-303d9d595ac6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8d275e6-1127-4816-9001-303d9d595ac6" (UID: "f8d275e6-1127-4816-9001-303d9d595ac6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:18 crc kubenswrapper[4796]: I1125 14:48:18.647772 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8d275e6-1127-4816-9001-303d9d595ac6-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:18 crc kubenswrapper[4796]: I1125 14:48:18.647804 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d275e6-1127-4816-9001-303d9d595ac6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:18 crc kubenswrapper[4796]: I1125 14:48:18.647819 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st5dm\" (UniqueName: \"kubernetes.io/projected/f8d275e6-1127-4816-9001-303d9d595ac6-kube-api-access-st5dm\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:18 crc kubenswrapper[4796]: I1125 14:48:18.656266 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-96sth"] Nov 25 14:48:18 crc kubenswrapper[4796]: I1125 14:48:18.656592 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" podUID="735e664e-6d33-446e-96dd-bd86dbe45ec3" containerName="dnsmasq-dns" containerID="cri-o://b25bd8f142a5225e6ea6fc9720f89d5e05a4a43db5f5ac132bf232d20b31b182" gracePeriod=10 Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.086848 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f8d275e6-1127-4816-9001-303d9d595ac6","Type":"ContainerDied","Data":"0f08eedfab38ebbfae3c8c3cbd146c28f607099b10454a9fd1b1f5885e5bf944"} Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.087175 4796 scope.go:117] "RemoveContainer" containerID="0a24bfb6272b180cb81a5e7bd76e282a67601c0d4c5f3539ef5e7db6a67074d2" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.087001 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.099206 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.103913 4796 generic.go:334] "Generic (PLEG): container finished" podID="735e664e-6d33-446e-96dd-bd86dbe45ec3" containerID="b25bd8f142a5225e6ea6fc9720f89d5e05a4a43db5f5ac132bf232d20b31b182" exitCode=0 Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.104041 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" event={"ID":"735e664e-6d33-446e-96dd-bd86dbe45ec3","Type":"ContainerDied","Data":"b25bd8f142a5225e6ea6fc9720f89d5e05a4a43db5f5ac132bf232d20b31b182"} Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.117972 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d778cf84-10a7-49cf-be96-d14d18e960e0","Type":"ContainerStarted","Data":"19cd220a64c4b193dda155587ac1c1e9a5e1a0f52d14a06436c0d2d620529f2f"} Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.119050 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.165999 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.057928019 podStartE2EDuration="15.165976557s" podCreationTimestamp="2025-11-25 14:48:04 +0000 UTC" firstStartedPulling="2025-11-25 14:48:05.060444811 +0000 UTC m=+1413.403554235" lastFinishedPulling="2025-11-25 14:48:18.168493349 +0000 UTC m=+1426.511602773" observedRunningTime="2025-11-25 14:48:19.161280249 +0000 UTC m=+1427.504389673" watchObservedRunningTime="2025-11-25 14:48:19.165976557 +0000 UTC m=+1427.509085981" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.188355 4796 scope.go:117] "RemoveContainer" 
containerID="72a9c0731f917011e384c6353774bbf855c59dcffaaedefe0d53d540efc98345" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.203762 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.228595 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.237542 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 14:48:19 crc kubenswrapper[4796]: E1125 14:48:19.238019 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8d275e6-1127-4816-9001-303d9d595ac6" containerName="nova-metadata-metadata" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.238035 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8d275e6-1127-4816-9001-303d9d595ac6" containerName="nova-metadata-metadata" Nov 25 14:48:19 crc kubenswrapper[4796]: E1125 14:48:19.238065 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8d275e6-1127-4816-9001-303d9d595ac6" containerName="nova-metadata-log" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.238072 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8d275e6-1127-4816-9001-303d9d595ac6" containerName="nova-metadata-log" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.238293 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8d275e6-1127-4816-9001-303d9d595ac6" containerName="nova-metadata-metadata" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.238307 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8d275e6-1127-4816-9001-303d9d595ac6" containerName="nova-metadata-log" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.239421 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.250301 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.252320 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.256927 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.281870 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.367563 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf9qp\" (UniqueName: \"kubernetes.io/projected/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-kube-api-access-wf9qp\") pod \"nova-metadata-0\" (UID: \"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0\") " pod="openstack/nova-metadata-0" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.367684 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0\") " pod="openstack/nova-metadata-0" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.367708 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-config-data\") pod \"nova-metadata-0\" (UID: \"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0\") " pod="openstack/nova-metadata-0" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.367725 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0\") " pod="openstack/nova-metadata-0" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.367772 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-logs\") pod \"nova-metadata-0\" (UID: \"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0\") " pod="openstack/nova-metadata-0" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.469526 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-config\") pod \"735e664e-6d33-446e-96dd-bd86dbe45ec3\" (UID: \"735e664e-6d33-446e-96dd-bd86dbe45ec3\") " Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.469704 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-ovsdbserver-sb\") pod \"735e664e-6d33-446e-96dd-bd86dbe45ec3\" (UID: \"735e664e-6d33-446e-96dd-bd86dbe45ec3\") " Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.469751 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-dns-svc\") pod \"735e664e-6d33-446e-96dd-bd86dbe45ec3\" (UID: \"735e664e-6d33-446e-96dd-bd86dbe45ec3\") " Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.469850 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvwb4\" (UniqueName: \"kubernetes.io/projected/735e664e-6d33-446e-96dd-bd86dbe45ec3-kube-api-access-fvwb4\") pod 
\"735e664e-6d33-446e-96dd-bd86dbe45ec3\" (UID: \"735e664e-6d33-446e-96dd-bd86dbe45ec3\") " Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.469914 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-dns-swift-storage-0\") pod \"735e664e-6d33-446e-96dd-bd86dbe45ec3\" (UID: \"735e664e-6d33-446e-96dd-bd86dbe45ec3\") " Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.469943 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-ovsdbserver-nb\") pod \"735e664e-6d33-446e-96dd-bd86dbe45ec3\" (UID: \"735e664e-6d33-446e-96dd-bd86dbe45ec3\") " Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.470206 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf9qp\" (UniqueName: \"kubernetes.io/projected/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-kube-api-access-wf9qp\") pod \"nova-metadata-0\" (UID: \"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0\") " pod="openstack/nova-metadata-0" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.470306 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0\") " pod="openstack/nova-metadata-0" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.470334 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-config-data\") pod \"nova-metadata-0\" (UID: \"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0\") " pod="openstack/nova-metadata-0" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.470358 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0\") " pod="openstack/nova-metadata-0" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.470422 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-logs\") pod \"nova-metadata-0\" (UID: \"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0\") " pod="openstack/nova-metadata-0" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.471428 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-logs\") pod \"nova-metadata-0\" (UID: \"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0\") " pod="openstack/nova-metadata-0" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.498992 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/735e664e-6d33-446e-96dd-bd86dbe45ec3-kube-api-access-fvwb4" (OuterVolumeSpecName: "kube-api-access-fvwb4") pod "735e664e-6d33-446e-96dd-bd86dbe45ec3" (UID: "735e664e-6d33-446e-96dd-bd86dbe45ec3"). InnerVolumeSpecName "kube-api-access-fvwb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.500690 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0\") " pod="openstack/nova-metadata-0" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.500712 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-config-data\") pod \"nova-metadata-0\" (UID: \"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0\") " pod="openstack/nova-metadata-0" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.531329 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf9qp\" (UniqueName: \"kubernetes.io/projected/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-kube-api-access-wf9qp\") pod \"nova-metadata-0\" (UID: \"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0\") " pod="openstack/nova-metadata-0" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.537273 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0\") " pod="openstack/nova-metadata-0" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.572938 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "735e664e-6d33-446e-96dd-bd86dbe45ec3" (UID: "735e664e-6d33-446e-96dd-bd86dbe45ec3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.574416 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvwb4\" (UniqueName: \"kubernetes.io/projected/735e664e-6d33-446e-96dd-bd86dbe45ec3-kube-api-access-fvwb4\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.574444 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.579952 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "735e664e-6d33-446e-96dd-bd86dbe45ec3" (UID: "735e664e-6d33-446e-96dd-bd86dbe45ec3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.585966 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-config" (OuterVolumeSpecName: "config") pod "735e664e-6d33-446e-96dd-bd86dbe45ec3" (UID: "735e664e-6d33-446e-96dd-bd86dbe45ec3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.597036 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.599557 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "735e664e-6d33-446e-96dd-bd86dbe45ec3" (UID: "735e664e-6d33-446e-96dd-bd86dbe45ec3"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.613382 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "735e664e-6d33-446e-96dd-bd86dbe45ec3" (UID: "735e664e-6d33-446e-96dd-bd86dbe45ec3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.676117 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.676715 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.676967 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:19 crc kubenswrapper[4796]: I1125 14:48:19.677091 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/735e664e-6d33-446e-96dd-bd86dbe45ec3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:20 crc kubenswrapper[4796]: I1125 14:48:20.090898 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 14:48:20 crc kubenswrapper[4796]: I1125 14:48:20.133528 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0","Type":"ContainerStarted","Data":"871d610fabc8887e189ddf2486e8b70fa5fe6766172f7130db8e7bfca8717b55"} Nov 25 14:48:20 crc kubenswrapper[4796]: I1125 14:48:20.136637 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" Nov 25 14:48:20 crc kubenswrapper[4796]: I1125 14:48:20.140299 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-96sth" event={"ID":"735e664e-6d33-446e-96dd-bd86dbe45ec3","Type":"ContainerDied","Data":"adf6cceec43e837ba7724fd7422def760ed13c685ef7e96c7511cb0a13d06f45"} Nov 25 14:48:20 crc kubenswrapper[4796]: I1125 14:48:20.140646 4796 scope.go:117] "RemoveContainer" containerID="b25bd8f142a5225e6ea6fc9720f89d5e05a4a43db5f5ac132bf232d20b31b182" Nov 25 14:48:20 crc kubenswrapper[4796]: I1125 14:48:20.169653 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-96sth"] Nov 25 14:48:20 crc kubenswrapper[4796]: I1125 14:48:20.180610 4796 scope.go:117] "RemoveContainer" containerID="02a03b6f6600327c3c7299cdd159c7738d649e4be9e59a51d99dd4d4863024c5" Nov 25 14:48:20 crc kubenswrapper[4796]: I1125 14:48:20.191662 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-96sth"] Nov 25 14:48:20 crc kubenswrapper[4796]: I1125 14:48:20.427302 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="735e664e-6d33-446e-96dd-bd86dbe45ec3" path="/var/lib/kubelet/pods/735e664e-6d33-446e-96dd-bd86dbe45ec3/volumes" Nov 25 14:48:20 crc kubenswrapper[4796]: I1125 14:48:20.428427 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8d275e6-1127-4816-9001-303d9d595ac6" path="/var/lib/kubelet/pods/f8d275e6-1127-4816-9001-303d9d595ac6/volumes" Nov 25 14:48:21 crc kubenswrapper[4796]: I1125 14:48:21.160538 4796 generic.go:334] "Generic (PLEG): container finished" 
podID="cf005684-c69a-4402-8a4d-82ea423e1902" containerID="2316935db37d883c3cf95cf2cefed00ee2aebccdd2c67757fb69b230a280b356" exitCode=0 Nov 25 14:48:21 crc kubenswrapper[4796]: I1125 14:48:21.160709 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-r97qb" event={"ID":"cf005684-c69a-4402-8a4d-82ea423e1902","Type":"ContainerDied","Data":"2316935db37d883c3cf95cf2cefed00ee2aebccdd2c67757fb69b230a280b356"} Nov 25 14:48:21 crc kubenswrapper[4796]: I1125 14:48:21.164825 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0","Type":"ContainerStarted","Data":"68e35ef617c89b38c22410a8788b4f98c98c951534d647b22ce906b68411c324"} Nov 25 14:48:21 crc kubenswrapper[4796]: I1125 14:48:21.164877 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0","Type":"ContainerStarted","Data":"ed9e826491f56e6a38e8d2b54e48aab1da09e98b16fa72c937269302f24dc820"} Nov 25 14:48:21 crc kubenswrapper[4796]: I1125 14:48:21.222075 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.222047758 podStartE2EDuration="2.222047758s" podCreationTimestamp="2025-11-25 14:48:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:48:21.209667501 +0000 UTC m=+1429.552776975" watchObservedRunningTime="2025-11-25 14:48:21.222047758 +0000 UTC m=+1429.565157202" Nov 25 14:48:22 crc kubenswrapper[4796]: I1125 14:48:22.590546 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-r97qb" Nov 25 14:48:22 crc kubenswrapper[4796]: I1125 14:48:22.760240 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf005684-c69a-4402-8a4d-82ea423e1902-config-data\") pod \"cf005684-c69a-4402-8a4d-82ea423e1902\" (UID: \"cf005684-c69a-4402-8a4d-82ea423e1902\") " Nov 25 14:48:22 crc kubenswrapper[4796]: I1125 14:48:22.760298 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf005684-c69a-4402-8a4d-82ea423e1902-scripts\") pod \"cf005684-c69a-4402-8a4d-82ea423e1902\" (UID: \"cf005684-c69a-4402-8a4d-82ea423e1902\") " Nov 25 14:48:22 crc kubenswrapper[4796]: I1125 14:48:22.760627 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl7vf\" (UniqueName: \"kubernetes.io/projected/cf005684-c69a-4402-8a4d-82ea423e1902-kube-api-access-rl7vf\") pod \"cf005684-c69a-4402-8a4d-82ea423e1902\" (UID: \"cf005684-c69a-4402-8a4d-82ea423e1902\") " Nov 25 14:48:22 crc kubenswrapper[4796]: I1125 14:48:22.760732 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf005684-c69a-4402-8a4d-82ea423e1902-combined-ca-bundle\") pod \"cf005684-c69a-4402-8a4d-82ea423e1902\" (UID: \"cf005684-c69a-4402-8a4d-82ea423e1902\") " Nov 25 14:48:22 crc kubenswrapper[4796]: I1125 14:48:22.765979 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf005684-c69a-4402-8a4d-82ea423e1902-kube-api-access-rl7vf" (OuterVolumeSpecName: "kube-api-access-rl7vf") pod "cf005684-c69a-4402-8a4d-82ea423e1902" (UID: "cf005684-c69a-4402-8a4d-82ea423e1902"). InnerVolumeSpecName "kube-api-access-rl7vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:48:22 crc kubenswrapper[4796]: I1125 14:48:22.768745 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf005684-c69a-4402-8a4d-82ea423e1902-scripts" (OuterVolumeSpecName: "scripts") pod "cf005684-c69a-4402-8a4d-82ea423e1902" (UID: "cf005684-c69a-4402-8a4d-82ea423e1902"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:22 crc kubenswrapper[4796]: I1125 14:48:22.793831 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf005684-c69a-4402-8a4d-82ea423e1902-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf005684-c69a-4402-8a4d-82ea423e1902" (UID: "cf005684-c69a-4402-8a4d-82ea423e1902"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:22 crc kubenswrapper[4796]: I1125 14:48:22.803063 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf005684-c69a-4402-8a4d-82ea423e1902-config-data" (OuterVolumeSpecName: "config-data") pod "cf005684-c69a-4402-8a4d-82ea423e1902" (UID: "cf005684-c69a-4402-8a4d-82ea423e1902"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:22 crc kubenswrapper[4796]: I1125 14:48:22.863409 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl7vf\" (UniqueName: \"kubernetes.io/projected/cf005684-c69a-4402-8a4d-82ea423e1902-kube-api-access-rl7vf\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:22 crc kubenswrapper[4796]: I1125 14:48:22.863456 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf005684-c69a-4402-8a4d-82ea423e1902-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:22 crc kubenswrapper[4796]: I1125 14:48:22.863473 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf005684-c69a-4402-8a4d-82ea423e1902-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:22 crc kubenswrapper[4796]: I1125 14:48:22.863488 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf005684-c69a-4402-8a4d-82ea423e1902-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:23 crc kubenswrapper[4796]: I1125 14:48:23.189231 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-r97qb" event={"ID":"cf005684-c69a-4402-8a4d-82ea423e1902","Type":"ContainerDied","Data":"0dfbd796212a2c3be9ac895cf09e54ae57200e7e5c36b476deb0c9660365994b"} Nov 25 14:48:23 crc kubenswrapper[4796]: I1125 14:48:23.189295 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dfbd796212a2c3be9ac895cf09e54ae57200e7e5c36b476deb0c9660365994b" Nov 25 14:48:23 crc kubenswrapper[4796]: I1125 14:48:23.189394 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-r97qb" Nov 25 14:48:23 crc kubenswrapper[4796]: I1125 14:48:23.364169 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 14:48:23 crc kubenswrapper[4796]: I1125 14:48:23.364248 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 14:48:23 crc kubenswrapper[4796]: I1125 14:48:23.385811 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 14:48:23 crc kubenswrapper[4796]: I1125 14:48:23.386108 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4108e6ff-21af-4a40-89bc-7224726ca1aa" containerName="nova-scheduler-scheduler" containerID="cri-o://58df70f7d2e435b5820a91a302a03991470ad7d29853160cc7b5ce33ada5febc" gracePeriod=30 Nov 25 14:48:23 crc kubenswrapper[4796]: I1125 14:48:23.398934 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 14:48:23 crc kubenswrapper[4796]: I1125 14:48:23.419294 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 14:48:23 crc kubenswrapper[4796]: I1125 14:48:23.419543 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0" containerName="nova-metadata-log" containerID="cri-o://ed9e826491f56e6a38e8d2b54e48aab1da09e98b16fa72c937269302f24dc820" gracePeriod=30 Nov 25 14:48:23 crc kubenswrapper[4796]: I1125 14:48:23.419645 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0" containerName="nova-metadata-metadata" containerID="cri-o://68e35ef617c89b38c22410a8788b4f98c98c951534d647b22ce906b68411c324" gracePeriod=30 Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.021754 4796 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.189469 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf9qp\" (UniqueName: \"kubernetes.io/projected/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-kube-api-access-wf9qp\") pod \"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0\" (UID: \"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0\") " Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.189664 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-combined-ca-bundle\") pod \"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0\" (UID: \"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0\") " Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.190556 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-logs\") pod \"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0\" (UID: \"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0\") " Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.190707 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-config-data\") pod \"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0\" (UID: \"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0\") " Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.190743 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-nova-metadata-tls-certs\") pod \"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0\" (UID: \"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0\") " Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.191148 4796 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-logs" (OuterVolumeSpecName: "logs") pod "1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0" (UID: "1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.203790 4796 generic.go:334] "Generic (PLEG): container finished" podID="1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0" containerID="68e35ef617c89b38c22410a8788b4f98c98c951534d647b22ce906b68411c324" exitCode=0 Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.203827 4796 generic.go:334] "Generic (PLEG): container finished" podID="1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0" containerID="ed9e826491f56e6a38e8d2b54e48aab1da09e98b16fa72c937269302f24dc820" exitCode=143 Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.203895 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0","Type":"ContainerDied","Data":"68e35ef617c89b38c22410a8788b4f98c98c951534d647b22ce906b68411c324"} Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.203926 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0","Type":"ContainerDied","Data":"ed9e826491f56e6a38e8d2b54e48aab1da09e98b16fa72c937269302f24dc820"} Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.203937 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0","Type":"ContainerDied","Data":"871d610fabc8887e189ddf2486e8b70fa5fe6766172f7130db8e7bfca8717b55"} Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.203951 4796 scope.go:117] "RemoveContainer" containerID="68e35ef617c89b38c22410a8788b4f98c98c951534d647b22ce906b68411c324" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.204089 4796 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.207119 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sd7sr" event={"ID":"c0249bda-04f8-417e-bc09-c57484f3a607","Type":"ContainerDied","Data":"58974433ef838816cc6c7ceab6c45bb4e2c9442206cf48282249f48517f0f8ca"} Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.207020 4796 generic.go:334] "Generic (PLEG): container finished" podID="c0249bda-04f8-417e-bc09-c57484f3a607" containerID="58974433ef838816cc6c7ceab6c45bb4e2c9442206cf48282249f48517f0f8ca" exitCode=0 Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.207908 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="18c961c7-a1f1-4e65-a64d-26a1787ee053" containerName="nova-api-log" containerID="cri-o://26f97120aade152816f55057f5ef6e108de5d93d06a1052864268e000f57664a" gracePeriod=30 Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.207926 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="18c961c7-a1f1-4e65-a64d-26a1787ee053" containerName="nova-api-api" containerID="cri-o://141d313cbbbc894b91231110daa8519db551e91b5f1aa8e9900d3b09329d12ed" gracePeriod=30 Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.211395 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-kube-api-access-wf9qp" (OuterVolumeSpecName: "kube-api-access-wf9qp") pod "1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0" (UID: "1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0"). InnerVolumeSpecName "kube-api-access-wf9qp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.223452 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="18c961c7-a1f1-4e65-a64d-26a1787ee053" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": EOF" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.223481 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="18c961c7-a1f1-4e65-a64d-26a1787ee053" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": EOF" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.233034 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0" (UID: "1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.238311 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-config-data" (OuterVolumeSpecName: "config-data") pod "1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0" (UID: "1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.251171 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0" (UID: "1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.294228 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-logs\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.294258 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.294269 4796 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.294279 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf9qp\" (UniqueName: \"kubernetes.io/projected/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-kube-api-access-wf9qp\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.294287 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.319363 4796 scope.go:117] "RemoveContainer" containerID="ed9e826491f56e6a38e8d2b54e48aab1da09e98b16fa72c937269302f24dc820" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.337093 4796 scope.go:117] "RemoveContainer" containerID="68e35ef617c89b38c22410a8788b4f98c98c951534d647b22ce906b68411c324" Nov 25 14:48:24 crc kubenswrapper[4796]: E1125 14:48:24.337841 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"68e35ef617c89b38c22410a8788b4f98c98c951534d647b22ce906b68411c324\": container with ID starting with 68e35ef617c89b38c22410a8788b4f98c98c951534d647b22ce906b68411c324 not found: ID does not exist" containerID="68e35ef617c89b38c22410a8788b4f98c98c951534d647b22ce906b68411c324" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.337946 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68e35ef617c89b38c22410a8788b4f98c98c951534d647b22ce906b68411c324"} err="failed to get container status \"68e35ef617c89b38c22410a8788b4f98c98c951534d647b22ce906b68411c324\": rpc error: code = NotFound desc = could not find container \"68e35ef617c89b38c22410a8788b4f98c98c951534d647b22ce906b68411c324\": container with ID starting with 68e35ef617c89b38c22410a8788b4f98c98c951534d647b22ce906b68411c324 not found: ID does not exist" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.338019 4796 scope.go:117] "RemoveContainer" containerID="ed9e826491f56e6a38e8d2b54e48aab1da09e98b16fa72c937269302f24dc820" Nov 25 14:48:24 crc kubenswrapper[4796]: E1125 14:48:24.338298 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed9e826491f56e6a38e8d2b54e48aab1da09e98b16fa72c937269302f24dc820\": container with ID starting with ed9e826491f56e6a38e8d2b54e48aab1da09e98b16fa72c937269302f24dc820 not found: ID does not exist" containerID="ed9e826491f56e6a38e8d2b54e48aab1da09e98b16fa72c937269302f24dc820" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.338325 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed9e826491f56e6a38e8d2b54e48aab1da09e98b16fa72c937269302f24dc820"} err="failed to get container status \"ed9e826491f56e6a38e8d2b54e48aab1da09e98b16fa72c937269302f24dc820\": rpc error: code = NotFound desc = could not find container \"ed9e826491f56e6a38e8d2b54e48aab1da09e98b16fa72c937269302f24dc820\": container with ID 
starting with ed9e826491f56e6a38e8d2b54e48aab1da09e98b16fa72c937269302f24dc820 not found: ID does not exist" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.338340 4796 scope.go:117] "RemoveContainer" containerID="68e35ef617c89b38c22410a8788b4f98c98c951534d647b22ce906b68411c324" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.338629 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68e35ef617c89b38c22410a8788b4f98c98c951534d647b22ce906b68411c324"} err="failed to get container status \"68e35ef617c89b38c22410a8788b4f98c98c951534d647b22ce906b68411c324\": rpc error: code = NotFound desc = could not find container \"68e35ef617c89b38c22410a8788b4f98c98c951534d647b22ce906b68411c324\": container with ID starting with 68e35ef617c89b38c22410a8788b4f98c98c951534d647b22ce906b68411c324 not found: ID does not exist" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.338656 4796 scope.go:117] "RemoveContainer" containerID="ed9e826491f56e6a38e8d2b54e48aab1da09e98b16fa72c937269302f24dc820" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.338902 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed9e826491f56e6a38e8d2b54e48aab1da09e98b16fa72c937269302f24dc820"} err="failed to get container status \"ed9e826491f56e6a38e8d2b54e48aab1da09e98b16fa72c937269302f24dc820\": rpc error: code = NotFound desc = could not find container \"ed9e826491f56e6a38e8d2b54e48aab1da09e98b16fa72c937269302f24dc820\": container with ID starting with ed9e826491f56e6a38e8d2b54e48aab1da09e98b16fa72c937269302f24dc820 not found: ID does not exist" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.525088 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.535520 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 14:48:24 crc kubenswrapper[4796]: 
I1125 14:48:24.559340 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 14:48:24 crc kubenswrapper[4796]: E1125 14:48:24.559853 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0" containerName="nova-metadata-log" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.559869 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0" containerName="nova-metadata-log" Nov 25 14:48:24 crc kubenswrapper[4796]: E1125 14:48:24.559888 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735e664e-6d33-446e-96dd-bd86dbe45ec3" containerName="init" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.559897 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="735e664e-6d33-446e-96dd-bd86dbe45ec3" containerName="init" Nov 25 14:48:24 crc kubenswrapper[4796]: E1125 14:48:24.559909 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0" containerName="nova-metadata-metadata" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.559915 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0" containerName="nova-metadata-metadata" Nov 25 14:48:24 crc kubenswrapper[4796]: E1125 14:48:24.559932 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735e664e-6d33-446e-96dd-bd86dbe45ec3" containerName="dnsmasq-dns" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.559939 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="735e664e-6d33-446e-96dd-bd86dbe45ec3" containerName="dnsmasq-dns" Nov 25 14:48:24 crc kubenswrapper[4796]: E1125 14:48:24.559949 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf005684-c69a-4402-8a4d-82ea423e1902" containerName="nova-manage" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.559955 4796 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cf005684-c69a-4402-8a4d-82ea423e1902" containerName="nova-manage" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.560137 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf005684-c69a-4402-8a4d-82ea423e1902" containerName="nova-manage" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.560153 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="735e664e-6d33-446e-96dd-bd86dbe45ec3" containerName="dnsmasq-dns" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.560162 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0" containerName="nova-metadata-log" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.560180 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0" containerName="nova-metadata-metadata" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.561107 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.564024 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.577914 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.580520 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.702591 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b7411dd4-cc53-4a32-82ea-03b3b51dbd55\") " pod="openstack/nova-metadata-0" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 
14:48:24.702635 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-logs\") pod \"nova-metadata-0\" (UID: \"b7411dd4-cc53-4a32-82ea-03b3b51dbd55\") " pod="openstack/nova-metadata-0" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.702666 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls7mr\" (UniqueName: \"kubernetes.io/projected/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-kube-api-access-ls7mr\") pod \"nova-metadata-0\" (UID: \"b7411dd4-cc53-4a32-82ea-03b3b51dbd55\") " pod="openstack/nova-metadata-0" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.702797 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-config-data\") pod \"nova-metadata-0\" (UID: \"b7411dd4-cc53-4a32-82ea-03b3b51dbd55\") " pod="openstack/nova-metadata-0" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.702825 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b7411dd4-cc53-4a32-82ea-03b3b51dbd55\") " pod="openstack/nova-metadata-0" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.804289 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b7411dd4-cc53-4a32-82ea-03b3b51dbd55\") " pod="openstack/nova-metadata-0" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.804352 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-logs\") pod \"nova-metadata-0\" (UID: \"b7411dd4-cc53-4a32-82ea-03b3b51dbd55\") " pod="openstack/nova-metadata-0" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.804393 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls7mr\" (UniqueName: \"kubernetes.io/projected/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-kube-api-access-ls7mr\") pod \"nova-metadata-0\" (UID: \"b7411dd4-cc53-4a32-82ea-03b3b51dbd55\") " pod="openstack/nova-metadata-0" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.804511 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-config-data\") pod \"nova-metadata-0\" (UID: \"b7411dd4-cc53-4a32-82ea-03b3b51dbd55\") " pod="openstack/nova-metadata-0" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.804550 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b7411dd4-cc53-4a32-82ea-03b3b51dbd55\") " pod="openstack/nova-metadata-0" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.804828 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-logs\") pod \"nova-metadata-0\" (UID: \"b7411dd4-cc53-4a32-82ea-03b3b51dbd55\") " pod="openstack/nova-metadata-0" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.810966 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-config-data\") pod \"nova-metadata-0\" (UID: \"b7411dd4-cc53-4a32-82ea-03b3b51dbd55\") " pod="openstack/nova-metadata-0" Nov 25 
14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.812621 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b7411dd4-cc53-4a32-82ea-03b3b51dbd55\") " pod="openstack/nova-metadata-0" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.812668 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b7411dd4-cc53-4a32-82ea-03b3b51dbd55\") " pod="openstack/nova-metadata-0" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.835133 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls7mr\" (UniqueName: \"kubernetes.io/projected/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-kube-api-access-ls7mr\") pod \"nova-metadata-0\" (UID: \"b7411dd4-cc53-4a32-82ea-03b3b51dbd55\") " pod="openstack/nova-metadata-0" Nov 25 14:48:24 crc kubenswrapper[4796]: I1125 14:48:24.907500 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 14:48:25 crc kubenswrapper[4796]: I1125 14:48:25.218824 4796 generic.go:334] "Generic (PLEG): container finished" podID="18c961c7-a1f1-4e65-a64d-26a1787ee053" containerID="26f97120aade152816f55057f5ef6e108de5d93d06a1052864268e000f57664a" exitCode=143 Nov 25 14:48:25 crc kubenswrapper[4796]: I1125 14:48:25.218894 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"18c961c7-a1f1-4e65-a64d-26a1787ee053","Type":"ContainerDied","Data":"26f97120aade152816f55057f5ef6e108de5d93d06a1052864268e000f57664a"} Nov 25 14:48:25 crc kubenswrapper[4796]: W1125 14:48:25.360463 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7411dd4_cc53_4a32_82ea_03b3b51dbd55.slice/crio-2b9fd5ec6465b483150672fca922e3258eb123dcbf863f3d5c3b53caf4bc8603 WatchSource:0}: Error finding container 2b9fd5ec6465b483150672fca922e3258eb123dcbf863f3d5c3b53caf4bc8603: Status 404 returned error can't find the container with id 2b9fd5ec6465b483150672fca922e3258eb123dcbf863f3d5c3b53caf4bc8603 Nov 25 14:48:25 crc kubenswrapper[4796]: I1125 14:48:25.361417 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 14:48:25 crc kubenswrapper[4796]: I1125 14:48:25.518505 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sd7sr" Nov 25 14:48:25 crc kubenswrapper[4796]: I1125 14:48:25.619807 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0249bda-04f8-417e-bc09-c57484f3a607-scripts\") pod \"c0249bda-04f8-417e-bc09-c57484f3a607\" (UID: \"c0249bda-04f8-417e-bc09-c57484f3a607\") " Nov 25 14:48:25 crc kubenswrapper[4796]: I1125 14:48:25.620283 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptc46\" (UniqueName: \"kubernetes.io/projected/c0249bda-04f8-417e-bc09-c57484f3a607-kube-api-access-ptc46\") pod \"c0249bda-04f8-417e-bc09-c57484f3a607\" (UID: \"c0249bda-04f8-417e-bc09-c57484f3a607\") " Nov 25 14:48:25 crc kubenswrapper[4796]: I1125 14:48:25.620465 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0249bda-04f8-417e-bc09-c57484f3a607-combined-ca-bundle\") pod \"c0249bda-04f8-417e-bc09-c57484f3a607\" (UID: \"c0249bda-04f8-417e-bc09-c57484f3a607\") " Nov 25 14:48:25 crc kubenswrapper[4796]: I1125 14:48:25.620502 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0249bda-04f8-417e-bc09-c57484f3a607-config-data\") pod \"c0249bda-04f8-417e-bc09-c57484f3a607\" (UID: \"c0249bda-04f8-417e-bc09-c57484f3a607\") " Nov 25 14:48:25 crc kubenswrapper[4796]: I1125 14:48:25.623872 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0249bda-04f8-417e-bc09-c57484f3a607-scripts" (OuterVolumeSpecName: "scripts") pod "c0249bda-04f8-417e-bc09-c57484f3a607" (UID: "c0249bda-04f8-417e-bc09-c57484f3a607"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:25 crc kubenswrapper[4796]: I1125 14:48:25.626728 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0249bda-04f8-417e-bc09-c57484f3a607-kube-api-access-ptc46" (OuterVolumeSpecName: "kube-api-access-ptc46") pod "c0249bda-04f8-417e-bc09-c57484f3a607" (UID: "c0249bda-04f8-417e-bc09-c57484f3a607"). InnerVolumeSpecName "kube-api-access-ptc46". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:48:25 crc kubenswrapper[4796]: I1125 14:48:25.645737 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0249bda-04f8-417e-bc09-c57484f3a607-config-data" (OuterVolumeSpecName: "config-data") pod "c0249bda-04f8-417e-bc09-c57484f3a607" (UID: "c0249bda-04f8-417e-bc09-c57484f3a607"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:25 crc kubenswrapper[4796]: I1125 14:48:25.648103 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0249bda-04f8-417e-bc09-c57484f3a607-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0249bda-04f8-417e-bc09-c57484f3a607" (UID: "c0249bda-04f8-417e-bc09-c57484f3a607"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:25 crc kubenswrapper[4796]: I1125 14:48:25.722630 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptc46\" (UniqueName: \"kubernetes.io/projected/c0249bda-04f8-417e-bc09-c57484f3a607-kube-api-access-ptc46\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:25 crc kubenswrapper[4796]: I1125 14:48:25.722656 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0249bda-04f8-417e-bc09-c57484f3a607-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:25 crc kubenswrapper[4796]: I1125 14:48:25.722666 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0249bda-04f8-417e-bc09-c57484f3a607-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:25 crc kubenswrapper[4796]: I1125 14:48:25.722674 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0249bda-04f8-417e-bc09-c57484f3a607-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:26 crc kubenswrapper[4796]: I1125 14:48:26.234082 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b7411dd4-cc53-4a32-82ea-03b3b51dbd55","Type":"ContainerStarted","Data":"1db5422d68c5080078d38372d84c6ea0effe0dfdfb647d99b56eb7278c921f77"} Nov 25 14:48:26 crc kubenswrapper[4796]: I1125 14:48:26.234158 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b7411dd4-cc53-4a32-82ea-03b3b51dbd55","Type":"ContainerStarted","Data":"0813a9e9b442965f567f5d4190003390eb2e596b352dbeb05673ffb66ba926b2"} Nov 25 14:48:26 crc kubenswrapper[4796]: I1125 14:48:26.234179 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"b7411dd4-cc53-4a32-82ea-03b3b51dbd55","Type":"ContainerStarted","Data":"2b9fd5ec6465b483150672fca922e3258eb123dcbf863f3d5c3b53caf4bc8603"} Nov 25 14:48:26 crc kubenswrapper[4796]: I1125 14:48:26.237174 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sd7sr" event={"ID":"c0249bda-04f8-417e-bc09-c57484f3a607","Type":"ContainerDied","Data":"b9d6ac5888c2c29e4b62579d00f0198da6df3f167ba705496029523e33e1b34f"} Nov 25 14:48:26 crc kubenswrapper[4796]: I1125 14:48:26.237225 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9d6ac5888c2c29e4b62579d00f0198da6df3f167ba705496029523e33e1b34f" Nov 25 14:48:26 crc kubenswrapper[4796]: I1125 14:48:26.237331 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sd7sr" Nov 25 14:48:26 crc kubenswrapper[4796]: I1125 14:48:26.276549 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.276528061 podStartE2EDuration="2.276528061s" podCreationTimestamp="2025-11-25 14:48:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:48:26.264756742 +0000 UTC m=+1434.607866206" watchObservedRunningTime="2025-11-25 14:48:26.276528061 +0000 UTC m=+1434.619637485" Nov 25 14:48:26 crc kubenswrapper[4796]: I1125 14:48:26.331188 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 14:48:26 crc kubenswrapper[4796]: E1125 14:48:26.331760 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0249bda-04f8-417e-bc09-c57484f3a607" containerName="nova-cell1-conductor-db-sync" Nov 25 14:48:26 crc kubenswrapper[4796]: I1125 14:48:26.331788 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0249bda-04f8-417e-bc09-c57484f3a607" 
containerName="nova-cell1-conductor-db-sync" Nov 25 14:48:26 crc kubenswrapper[4796]: I1125 14:48:26.332088 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0249bda-04f8-417e-bc09-c57484f3a607" containerName="nova-cell1-conductor-db-sync" Nov 25 14:48:26 crc kubenswrapper[4796]: I1125 14:48:26.333411 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 25 14:48:26 crc kubenswrapper[4796]: I1125 14:48:26.338311 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 25 14:48:26 crc kubenswrapper[4796]: I1125 14:48:26.377026 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 14:48:26 crc kubenswrapper[4796]: I1125 14:48:26.418849 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0" path="/var/lib/kubelet/pods/1d4cd31e-5b48-4cb6-8d0e-a0ad92ef68f0/volumes" Nov 25 14:48:26 crc kubenswrapper[4796]: I1125 14:48:26.464833 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smtcn\" (UniqueName: \"kubernetes.io/projected/614944f2-a1d3-41e0-82a4-3182bd6770af-kube-api-access-smtcn\") pod \"nova-cell1-conductor-0\" (UID: \"614944f2-a1d3-41e0-82a4-3182bd6770af\") " pod="openstack/nova-cell1-conductor-0" Nov 25 14:48:26 crc kubenswrapper[4796]: I1125 14:48:26.465050 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/614944f2-a1d3-41e0-82a4-3182bd6770af-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"614944f2-a1d3-41e0-82a4-3182bd6770af\") " pod="openstack/nova-cell1-conductor-0" Nov 25 14:48:26 crc kubenswrapper[4796]: I1125 14:48:26.465132 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/614944f2-a1d3-41e0-82a4-3182bd6770af-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"614944f2-a1d3-41e0-82a4-3182bd6770af\") " pod="openstack/nova-cell1-conductor-0" Nov 25 14:48:26 crc kubenswrapper[4796]: I1125 14:48:26.567188 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smtcn\" (UniqueName: \"kubernetes.io/projected/614944f2-a1d3-41e0-82a4-3182bd6770af-kube-api-access-smtcn\") pod \"nova-cell1-conductor-0\" (UID: \"614944f2-a1d3-41e0-82a4-3182bd6770af\") " pod="openstack/nova-cell1-conductor-0" Nov 25 14:48:26 crc kubenswrapper[4796]: I1125 14:48:26.567275 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/614944f2-a1d3-41e0-82a4-3182bd6770af-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"614944f2-a1d3-41e0-82a4-3182bd6770af\") " pod="openstack/nova-cell1-conductor-0" Nov 25 14:48:26 crc kubenswrapper[4796]: I1125 14:48:26.567302 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/614944f2-a1d3-41e0-82a4-3182bd6770af-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"614944f2-a1d3-41e0-82a4-3182bd6770af\") " pod="openstack/nova-cell1-conductor-0" Nov 25 14:48:26 crc kubenswrapper[4796]: I1125 14:48:26.579782 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/614944f2-a1d3-41e0-82a4-3182bd6770af-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"614944f2-a1d3-41e0-82a4-3182bd6770af\") " pod="openstack/nova-cell1-conductor-0" Nov 25 14:48:26 crc kubenswrapper[4796]: I1125 14:48:26.583395 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/614944f2-a1d3-41e0-82a4-3182bd6770af-combined-ca-bundle\") pod 
\"nova-cell1-conductor-0\" (UID: \"614944f2-a1d3-41e0-82a4-3182bd6770af\") " pod="openstack/nova-cell1-conductor-0" Nov 25 14:48:26 crc kubenswrapper[4796]: I1125 14:48:26.590734 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smtcn\" (UniqueName: \"kubernetes.io/projected/614944f2-a1d3-41e0-82a4-3182bd6770af-kube-api-access-smtcn\") pod \"nova-cell1-conductor-0\" (UID: \"614944f2-a1d3-41e0-82a4-3182bd6770af\") " pod="openstack/nova-cell1-conductor-0" Nov 25 14:48:26 crc kubenswrapper[4796]: I1125 14:48:26.674390 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 25 14:48:26 crc kubenswrapper[4796]: I1125 14:48:26.964545 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zbqbg" podUID="b13c00ac-c5a5-413c-8df6-a1b7111a87a3" containerName="registry-server" probeResult="failure" output=< Nov 25 14:48:26 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Nov 25 14:48:26 crc kubenswrapper[4796]: > Nov 25 14:48:27 crc kubenswrapper[4796]: I1125 14:48:27.157968 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 14:48:27 crc kubenswrapper[4796]: W1125 14:48:27.161820 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod614944f2_a1d3_41e0_82a4_3182bd6770af.slice/crio-b6a7589b82cd260224d27f57e859a1917577105350408fce6cb4856bbbb8af00 WatchSource:0}: Error finding container b6a7589b82cd260224d27f57e859a1917577105350408fce6cb4856bbbb8af00: Status 404 returned error can't find the container with id b6a7589b82cd260224d27f57e859a1917577105350408fce6cb4856bbbb8af00 Nov 25 14:48:27 crc kubenswrapper[4796]: I1125 14:48:27.249331 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"614944f2-a1d3-41e0-82a4-3182bd6770af","Type":"ContainerStarted","Data":"b6a7589b82cd260224d27f57e859a1917577105350408fce6cb4856bbbb8af00"} Nov 25 14:48:28 crc kubenswrapper[4796]: I1125 14:48:28.260749 4796 generic.go:334] "Generic (PLEG): container finished" podID="4108e6ff-21af-4a40-89bc-7224726ca1aa" containerID="58df70f7d2e435b5820a91a302a03991470ad7d29853160cc7b5ce33ada5febc" exitCode=0 Nov 25 14:48:28 crc kubenswrapper[4796]: I1125 14:48:28.261271 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4108e6ff-21af-4a40-89bc-7224726ca1aa","Type":"ContainerDied","Data":"58df70f7d2e435b5820a91a302a03991470ad7d29853160cc7b5ce33ada5febc"} Nov 25 14:48:28 crc kubenswrapper[4796]: I1125 14:48:28.263380 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"614944f2-a1d3-41e0-82a4-3182bd6770af","Type":"ContainerStarted","Data":"213cc841a9e09448d1c309230206cd06e8f73c58e398670c413f3f4844da4b91"} Nov 25 14:48:28 crc kubenswrapper[4796]: I1125 14:48:28.263469 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 25 14:48:28 crc kubenswrapper[4796]: I1125 14:48:28.303320 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.303298854 podStartE2EDuration="2.303298854s" podCreationTimestamp="2025-11-25 14:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:48:28.283546435 +0000 UTC m=+1436.626655869" watchObservedRunningTime="2025-11-25 14:48:28.303298854 +0000 UTC m=+1436.646408288" Nov 25 14:48:28 crc kubenswrapper[4796]: I1125 14:48:28.503182 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 14:48:28 crc kubenswrapper[4796]: I1125 14:48:28.508871 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2mw4\" (UniqueName: \"kubernetes.io/projected/4108e6ff-21af-4a40-89bc-7224726ca1aa-kube-api-access-j2mw4\") pod \"4108e6ff-21af-4a40-89bc-7224726ca1aa\" (UID: \"4108e6ff-21af-4a40-89bc-7224726ca1aa\") " Nov 25 14:48:28 crc kubenswrapper[4796]: I1125 14:48:28.509067 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4108e6ff-21af-4a40-89bc-7224726ca1aa-config-data\") pod \"4108e6ff-21af-4a40-89bc-7224726ca1aa\" (UID: \"4108e6ff-21af-4a40-89bc-7224726ca1aa\") " Nov 25 14:48:28 crc kubenswrapper[4796]: I1125 14:48:28.509247 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4108e6ff-21af-4a40-89bc-7224726ca1aa-combined-ca-bundle\") pod \"4108e6ff-21af-4a40-89bc-7224726ca1aa\" (UID: \"4108e6ff-21af-4a40-89bc-7224726ca1aa\") " Nov 25 14:48:28 crc kubenswrapper[4796]: I1125 14:48:28.518698 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4108e6ff-21af-4a40-89bc-7224726ca1aa-kube-api-access-j2mw4" (OuterVolumeSpecName: "kube-api-access-j2mw4") pod "4108e6ff-21af-4a40-89bc-7224726ca1aa" (UID: "4108e6ff-21af-4a40-89bc-7224726ca1aa"). InnerVolumeSpecName "kube-api-access-j2mw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:48:28 crc kubenswrapper[4796]: I1125 14:48:28.556481 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4108e6ff-21af-4a40-89bc-7224726ca1aa-config-data" (OuterVolumeSpecName: "config-data") pod "4108e6ff-21af-4a40-89bc-7224726ca1aa" (UID: "4108e6ff-21af-4a40-89bc-7224726ca1aa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:28 crc kubenswrapper[4796]: I1125 14:48:28.562422 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4108e6ff-21af-4a40-89bc-7224726ca1aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4108e6ff-21af-4a40-89bc-7224726ca1aa" (UID: "4108e6ff-21af-4a40-89bc-7224726ca1aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:28 crc kubenswrapper[4796]: I1125 14:48:28.612097 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4108e6ff-21af-4a40-89bc-7224726ca1aa-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:28 crc kubenswrapper[4796]: I1125 14:48:28.612136 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4108e6ff-21af-4a40-89bc-7224726ca1aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:28 crc kubenswrapper[4796]: I1125 14:48:28.612152 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2mw4\" (UniqueName: \"kubernetes.io/projected/4108e6ff-21af-4a40-89bc-7224726ca1aa-kube-api-access-j2mw4\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:29 crc kubenswrapper[4796]: I1125 14:48:29.282009 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4108e6ff-21af-4a40-89bc-7224726ca1aa","Type":"ContainerDied","Data":"1e060020e8e18834953cdeaa3fe35dd9119cc8a05edfec10340c6a51094dbe6e"} Nov 25 14:48:29 crc kubenswrapper[4796]: I1125 14:48:29.283175 4796 scope.go:117] "RemoveContainer" containerID="58df70f7d2e435b5820a91a302a03991470ad7d29853160cc7b5ce33ada5febc" Nov 25 14:48:29 crc kubenswrapper[4796]: I1125 14:48:29.282049 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 14:48:29 crc kubenswrapper[4796]: I1125 14:48:29.334399 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 14:48:29 crc kubenswrapper[4796]: I1125 14:48:29.359364 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 14:48:29 crc kubenswrapper[4796]: I1125 14:48:29.396628 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 14:48:29 crc kubenswrapper[4796]: E1125 14:48:29.397063 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4108e6ff-21af-4a40-89bc-7224726ca1aa" containerName="nova-scheduler-scheduler" Nov 25 14:48:29 crc kubenswrapper[4796]: I1125 14:48:29.397086 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="4108e6ff-21af-4a40-89bc-7224726ca1aa" containerName="nova-scheduler-scheduler" Nov 25 14:48:29 crc kubenswrapper[4796]: I1125 14:48:29.397283 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="4108e6ff-21af-4a40-89bc-7224726ca1aa" containerName="nova-scheduler-scheduler" Nov 25 14:48:29 crc kubenswrapper[4796]: I1125 14:48:29.398112 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 14:48:29 crc kubenswrapper[4796]: I1125 14:48:29.402626 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 14:48:29 crc kubenswrapper[4796]: I1125 14:48:29.405515 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 14:48:29 crc kubenswrapper[4796]: I1125 14:48:29.533198 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d80946-e077-4789-8c1f-f67180e2fb9f-config-data\") pod \"nova-scheduler-0\" (UID: \"22d80946-e077-4789-8c1f-f67180e2fb9f\") " pod="openstack/nova-scheduler-0" Nov 25 14:48:29 crc kubenswrapper[4796]: I1125 14:48:29.533645 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d80946-e077-4789-8c1f-f67180e2fb9f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"22d80946-e077-4789-8c1f-f67180e2fb9f\") " pod="openstack/nova-scheduler-0" Nov 25 14:48:29 crc kubenswrapper[4796]: I1125 14:48:29.533785 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqblh\" (UniqueName: \"kubernetes.io/projected/22d80946-e077-4789-8c1f-f67180e2fb9f-kube-api-access-qqblh\") pod \"nova-scheduler-0\" (UID: \"22d80946-e077-4789-8c1f-f67180e2fb9f\") " pod="openstack/nova-scheduler-0" Nov 25 14:48:29 crc kubenswrapper[4796]: I1125 14:48:29.635783 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d80946-e077-4789-8c1f-f67180e2fb9f-config-data\") pod \"nova-scheduler-0\" (UID: \"22d80946-e077-4789-8c1f-f67180e2fb9f\") " pod="openstack/nova-scheduler-0" Nov 25 14:48:29 crc kubenswrapper[4796]: I1125 14:48:29.635893 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d80946-e077-4789-8c1f-f67180e2fb9f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"22d80946-e077-4789-8c1f-f67180e2fb9f\") " pod="openstack/nova-scheduler-0" Nov 25 14:48:29 crc kubenswrapper[4796]: I1125 14:48:29.635953 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqblh\" (UniqueName: \"kubernetes.io/projected/22d80946-e077-4789-8c1f-f67180e2fb9f-kube-api-access-qqblh\") pod \"nova-scheduler-0\" (UID: \"22d80946-e077-4789-8c1f-f67180e2fb9f\") " pod="openstack/nova-scheduler-0" Nov 25 14:48:29 crc kubenswrapper[4796]: I1125 14:48:29.642420 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d80946-e077-4789-8c1f-f67180e2fb9f-config-data\") pod \"nova-scheduler-0\" (UID: \"22d80946-e077-4789-8c1f-f67180e2fb9f\") " pod="openstack/nova-scheduler-0" Nov 25 14:48:29 crc kubenswrapper[4796]: I1125 14:48:29.642736 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d80946-e077-4789-8c1f-f67180e2fb9f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"22d80946-e077-4789-8c1f-f67180e2fb9f\") " pod="openstack/nova-scheduler-0" Nov 25 14:48:29 crc kubenswrapper[4796]: I1125 14:48:29.662169 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqblh\" (UniqueName: \"kubernetes.io/projected/22d80946-e077-4789-8c1f-f67180e2fb9f-kube-api-access-qqblh\") pod \"nova-scheduler-0\" (UID: \"22d80946-e077-4789-8c1f-f67180e2fb9f\") " pod="openstack/nova-scheduler-0" Nov 25 14:48:29 crc kubenswrapper[4796]: I1125 14:48:29.722549 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 14:48:29 crc kubenswrapper[4796]: I1125 14:48:29.908497 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 14:48:29 crc kubenswrapper[4796]: I1125 14:48:29.908563 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.005279 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.144785 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18c961c7-a1f1-4e65-a64d-26a1787ee053-logs\") pod \"18c961c7-a1f1-4e65-a64d-26a1787ee053\" (UID: \"18c961c7-a1f1-4e65-a64d-26a1787ee053\") " Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.145064 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9248\" (UniqueName: \"kubernetes.io/projected/18c961c7-a1f1-4e65-a64d-26a1787ee053-kube-api-access-h9248\") pod \"18c961c7-a1f1-4e65-a64d-26a1787ee053\" (UID: \"18c961c7-a1f1-4e65-a64d-26a1787ee053\") " Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.145098 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18c961c7-a1f1-4e65-a64d-26a1787ee053-config-data\") pod \"18c961c7-a1f1-4e65-a64d-26a1787ee053\" (UID: \"18c961c7-a1f1-4e65-a64d-26a1787ee053\") " Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.145149 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18c961c7-a1f1-4e65-a64d-26a1787ee053-combined-ca-bundle\") pod \"18c961c7-a1f1-4e65-a64d-26a1787ee053\" (UID: \"18c961c7-a1f1-4e65-a64d-26a1787ee053\") " Nov 25 14:48:30 crc 
kubenswrapper[4796]: I1125 14:48:30.145747 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18c961c7-a1f1-4e65-a64d-26a1787ee053-logs" (OuterVolumeSpecName: "logs") pod "18c961c7-a1f1-4e65-a64d-26a1787ee053" (UID: "18c961c7-a1f1-4e65-a64d-26a1787ee053"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.149300 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18c961c7-a1f1-4e65-a64d-26a1787ee053-kube-api-access-h9248" (OuterVolumeSpecName: "kube-api-access-h9248") pod "18c961c7-a1f1-4e65-a64d-26a1787ee053" (UID: "18c961c7-a1f1-4e65-a64d-26a1787ee053"). InnerVolumeSpecName "kube-api-access-h9248". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.173171 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18c961c7-a1f1-4e65-a64d-26a1787ee053-config-data" (OuterVolumeSpecName: "config-data") pod "18c961c7-a1f1-4e65-a64d-26a1787ee053" (UID: "18c961c7-a1f1-4e65-a64d-26a1787ee053"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.191093 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18c961c7-a1f1-4e65-a64d-26a1787ee053-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18c961c7-a1f1-4e65-a64d-26a1787ee053" (UID: "18c961c7-a1f1-4e65-a64d-26a1787ee053"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:30 crc kubenswrapper[4796]: W1125 14:48:30.206116 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22d80946_e077_4789_8c1f_f67180e2fb9f.slice/crio-e6c560e866d1d384efac5896beda5d0ca547311aeb733e861a2ae3f93591415e WatchSource:0}: Error finding container e6c560e866d1d384efac5896beda5d0ca547311aeb733e861a2ae3f93591415e: Status 404 returned error can't find the container with id e6c560e866d1d384efac5896beda5d0ca547311aeb733e861a2ae3f93591415e Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.206342 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.247517 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9248\" (UniqueName: \"kubernetes.io/projected/18c961c7-a1f1-4e65-a64d-26a1787ee053-kube-api-access-h9248\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.247559 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18c961c7-a1f1-4e65-a64d-26a1787ee053-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.247588 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18c961c7-a1f1-4e65-a64d-26a1787ee053-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.247600 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18c961c7-a1f1-4e65-a64d-26a1787ee053-logs\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.291701 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"22d80946-e077-4789-8c1f-f67180e2fb9f","Type":"ContainerStarted","Data":"e6c560e866d1d384efac5896beda5d0ca547311aeb733e861a2ae3f93591415e"} Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.295167 4796 generic.go:334] "Generic (PLEG): container finished" podID="18c961c7-a1f1-4e65-a64d-26a1787ee053" containerID="141d313cbbbc894b91231110daa8519db551e91b5f1aa8e9900d3b09329d12ed" exitCode=0 Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.295229 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"18c961c7-a1f1-4e65-a64d-26a1787ee053","Type":"ContainerDied","Data":"141d313cbbbc894b91231110daa8519db551e91b5f1aa8e9900d3b09329d12ed"} Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.295358 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"18c961c7-a1f1-4e65-a64d-26a1787ee053","Type":"ContainerDied","Data":"0ead1b7ed3d9e450a278d9e11feb439314b75af9614b8f40821d834246488fbf"} Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.295242 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.295403 4796 scope.go:117] "RemoveContainer" containerID="141d313cbbbc894b91231110daa8519db551e91b5f1aa8e9900d3b09329d12ed" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.338847 4796 scope.go:117] "RemoveContainer" containerID="26f97120aade152816f55057f5ef6e108de5d93d06a1052864268e000f57664a" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.354962 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.375880 4796 scope.go:117] "RemoveContainer" containerID="141d313cbbbc894b91231110daa8519db551e91b5f1aa8e9900d3b09329d12ed" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.376235 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 14:48:30 crc kubenswrapper[4796]: E1125 14:48:30.376463 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"141d313cbbbc894b91231110daa8519db551e91b5f1aa8e9900d3b09329d12ed\": container with ID starting with 141d313cbbbc894b91231110daa8519db551e91b5f1aa8e9900d3b09329d12ed not found: ID does not exist" containerID="141d313cbbbc894b91231110daa8519db551e91b5f1aa8e9900d3b09329d12ed" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.376524 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"141d313cbbbc894b91231110daa8519db551e91b5f1aa8e9900d3b09329d12ed"} err="failed to get container status \"141d313cbbbc894b91231110daa8519db551e91b5f1aa8e9900d3b09329d12ed\": rpc error: code = NotFound desc = could not find container \"141d313cbbbc894b91231110daa8519db551e91b5f1aa8e9900d3b09329d12ed\": container with ID starting with 141d313cbbbc894b91231110daa8519db551e91b5f1aa8e9900d3b09329d12ed not found: ID does not exist" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 
14:48:30.376555 4796 scope.go:117] "RemoveContainer" containerID="26f97120aade152816f55057f5ef6e108de5d93d06a1052864268e000f57664a" Nov 25 14:48:30 crc kubenswrapper[4796]: E1125 14:48:30.377090 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26f97120aade152816f55057f5ef6e108de5d93d06a1052864268e000f57664a\": container with ID starting with 26f97120aade152816f55057f5ef6e108de5d93d06a1052864268e000f57664a not found: ID does not exist" containerID="26f97120aade152816f55057f5ef6e108de5d93d06a1052864268e000f57664a" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.377136 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26f97120aade152816f55057f5ef6e108de5d93d06a1052864268e000f57664a"} err="failed to get container status \"26f97120aade152816f55057f5ef6e108de5d93d06a1052864268e000f57664a\": rpc error: code = NotFound desc = could not find container \"26f97120aade152816f55057f5ef6e108de5d93d06a1052864268e000f57664a\": container with ID starting with 26f97120aade152816f55057f5ef6e108de5d93d06a1052864268e000f57664a not found: ID does not exist" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.389658 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 14:48:30 crc kubenswrapper[4796]: E1125 14:48:30.390140 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c961c7-a1f1-4e65-a64d-26a1787ee053" containerName="nova-api-api" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.390158 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c961c7-a1f1-4e65-a64d-26a1787ee053" containerName="nova-api-api" Nov 25 14:48:30 crc kubenswrapper[4796]: E1125 14:48:30.390181 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c961c7-a1f1-4e65-a64d-26a1787ee053" containerName="nova-api-log" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.390187 4796 
state_mem.go:107] "Deleted CPUSet assignment" podUID="18c961c7-a1f1-4e65-a64d-26a1787ee053" containerName="nova-api-log" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.390379 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c961c7-a1f1-4e65-a64d-26a1787ee053" containerName="nova-api-api" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.390402 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c961c7-a1f1-4e65-a64d-26a1787ee053" containerName="nova-api-log" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.391623 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.397324 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.400519 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.426050 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18c961c7-a1f1-4e65-a64d-26a1787ee053" path="/var/lib/kubelet/pods/18c961c7-a1f1-4e65-a64d-26a1787ee053/volumes" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.426693 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4108e6ff-21af-4a40-89bc-7224726ca1aa" path="/var/lib/kubelet/pods/4108e6ff-21af-4a40-89bc-7224726ca1aa/volumes" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.451489 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a1b699-a865-4992-91be-1b725faa49a6-config-data\") pod \"nova-api-0\" (UID: \"53a1b699-a865-4992-91be-1b725faa49a6\") " pod="openstack/nova-api-0" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.451550 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a1b699-a865-4992-91be-1b725faa49a6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"53a1b699-a865-4992-91be-1b725faa49a6\") " pod="openstack/nova-api-0" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.451923 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm7vm\" (UniqueName: \"kubernetes.io/projected/53a1b699-a865-4992-91be-1b725faa49a6-kube-api-access-pm7vm\") pod \"nova-api-0\" (UID: \"53a1b699-a865-4992-91be-1b725faa49a6\") " pod="openstack/nova-api-0" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.452009 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53a1b699-a865-4992-91be-1b725faa49a6-logs\") pod \"nova-api-0\" (UID: \"53a1b699-a865-4992-91be-1b725faa49a6\") " pod="openstack/nova-api-0" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.552974 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm7vm\" (UniqueName: \"kubernetes.io/projected/53a1b699-a865-4992-91be-1b725faa49a6-kube-api-access-pm7vm\") pod \"nova-api-0\" (UID: \"53a1b699-a865-4992-91be-1b725faa49a6\") " pod="openstack/nova-api-0" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.553040 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53a1b699-a865-4992-91be-1b725faa49a6-logs\") pod \"nova-api-0\" (UID: \"53a1b699-a865-4992-91be-1b725faa49a6\") " pod="openstack/nova-api-0" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.553105 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a1b699-a865-4992-91be-1b725faa49a6-config-data\") pod \"nova-api-0\" (UID: 
\"53a1b699-a865-4992-91be-1b725faa49a6\") " pod="openstack/nova-api-0" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.553125 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a1b699-a865-4992-91be-1b725faa49a6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"53a1b699-a865-4992-91be-1b725faa49a6\") " pod="openstack/nova-api-0" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.553762 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53a1b699-a865-4992-91be-1b725faa49a6-logs\") pod \"nova-api-0\" (UID: \"53a1b699-a865-4992-91be-1b725faa49a6\") " pod="openstack/nova-api-0" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.559159 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a1b699-a865-4992-91be-1b725faa49a6-config-data\") pod \"nova-api-0\" (UID: \"53a1b699-a865-4992-91be-1b725faa49a6\") " pod="openstack/nova-api-0" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.559442 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a1b699-a865-4992-91be-1b725faa49a6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"53a1b699-a865-4992-91be-1b725faa49a6\") " pod="openstack/nova-api-0" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.574467 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm7vm\" (UniqueName: \"kubernetes.io/projected/53a1b699-a865-4992-91be-1b725faa49a6-kube-api-access-pm7vm\") pod \"nova-api-0\" (UID: \"53a1b699-a865-4992-91be-1b725faa49a6\") " pod="openstack/nova-api-0" Nov 25 14:48:30 crc kubenswrapper[4796]: I1125 14:48:30.724221 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 14:48:31 crc kubenswrapper[4796]: I1125 14:48:31.242802 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 14:48:31 crc kubenswrapper[4796]: I1125 14:48:31.311045 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53a1b699-a865-4992-91be-1b725faa49a6","Type":"ContainerStarted","Data":"1470db05f804453c9d199827f1f6a2453dca5d86a8e12ac707d0ab784d15bf01"} Nov 25 14:48:31 crc kubenswrapper[4796]: I1125 14:48:31.314107 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"22d80946-e077-4789-8c1f-f67180e2fb9f","Type":"ContainerStarted","Data":"7e7e59f3940028bd502dd32fb7a03d57a1c94cc8bd282b359b9c9bd1f5ab5651"} Nov 25 14:48:31 crc kubenswrapper[4796]: I1125 14:48:31.342931 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.342911576 podStartE2EDuration="2.342911576s" podCreationTimestamp="2025-11-25 14:48:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:48:31.335410311 +0000 UTC m=+1439.678519755" watchObservedRunningTime="2025-11-25 14:48:31.342911576 +0000 UTC m=+1439.686021000" Nov 25 14:48:32 crc kubenswrapper[4796]: I1125 14:48:32.327558 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53a1b699-a865-4992-91be-1b725faa49a6","Type":"ContainerStarted","Data":"b3c063bdbfc5fe8377182b547e9858b80108234a22fee460b4af69787f03d2cd"} Nov 25 14:48:32 crc kubenswrapper[4796]: I1125 14:48:32.329506 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53a1b699-a865-4992-91be-1b725faa49a6","Type":"ContainerStarted","Data":"0777a4d2e80f3c8d83d22859ffd355c74ed1f22543ea2d4b7c0b748206cb1850"} Nov 25 14:48:32 crc kubenswrapper[4796]: I1125 
14:48:32.373446 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.373396418 podStartE2EDuration="2.373396418s" podCreationTimestamp="2025-11-25 14:48:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:48:32.349917262 +0000 UTC m=+1440.693026686" watchObservedRunningTime="2025-11-25 14:48:32.373396418 +0000 UTC m=+1440.716505882" Nov 25 14:48:34 crc kubenswrapper[4796]: I1125 14:48:34.578544 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 14:48:34 crc kubenswrapper[4796]: I1125 14:48:34.723123 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 14:48:34 crc kubenswrapper[4796]: I1125 14:48:34.908058 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 14:48:34 crc kubenswrapper[4796]: I1125 14:48:34.908114 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 14:48:35 crc kubenswrapper[4796]: I1125 14:48:35.920932 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b7411dd4-cc53-4a32-82ea-03b3b51dbd55" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 14:48:35 crc kubenswrapper[4796]: I1125 14:48:35.920981 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b7411dd4-cc53-4a32-82ea-03b3b51dbd55" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 14:48:36 crc kubenswrapper[4796]: I1125 14:48:36.718521 4796 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 25 14:48:36 crc kubenswrapper[4796]: I1125 14:48:36.948831 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zbqbg" podUID="b13c00ac-c5a5-413c-8df6-a1b7111a87a3" containerName="registry-server" probeResult="failure" output=< Nov 25 14:48:36 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Nov 25 14:48:36 crc kubenswrapper[4796]: > Nov 25 14:48:39 crc kubenswrapper[4796]: I1125 14:48:39.722848 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 25 14:48:39 crc kubenswrapper[4796]: I1125 14:48:39.769988 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 25 14:48:40 crc kubenswrapper[4796]: I1125 14:48:40.457762 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 25 14:48:40 crc kubenswrapper[4796]: I1125 14:48:40.724840 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 14:48:40 crc kubenswrapper[4796]: I1125 14:48:40.724893 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 14:48:41 crc kubenswrapper[4796]: I1125 14:48:41.807870 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="53a1b699-a865-4992-91be-1b725faa49a6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 14:48:41 crc kubenswrapper[4796]: I1125 14:48:41.808052 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="53a1b699-a865-4992-91be-1b725faa49a6" containerName="nova-api-log" probeResult="failure" output="Get 
\"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 14:48:44 crc kubenswrapper[4796]: I1125 14:48:44.915921 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 14:48:44 crc kubenswrapper[4796]: I1125 14:48:44.921402 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 14:48:44 crc kubenswrapper[4796]: I1125 14:48:44.925418 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 14:48:45 crc kubenswrapper[4796]: I1125 14:48:45.457025 4796 generic.go:334] "Generic (PLEG): container finished" podID="7d488015-7c7f-4601-a332-580819e6e571" containerID="a415fcb8a66ddea15366e1b0a06ead28f09f623219e55d09bfb932aacb4de794" exitCode=137 Nov 25 14:48:45 crc kubenswrapper[4796]: I1125 14:48:45.457208 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7d488015-7c7f-4601-a332-580819e6e571","Type":"ContainerDied","Data":"a415fcb8a66ddea15366e1b0a06ead28f09f623219e55d09bfb932aacb4de794"} Nov 25 14:48:45 crc kubenswrapper[4796]: I1125 14:48:45.458570 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7d488015-7c7f-4601-a332-580819e6e571","Type":"ContainerDied","Data":"931fe2549977de5714a3ae15453b078b8fb3235e74070c7709a224329f97e24e"} Nov 25 14:48:45 crc kubenswrapper[4796]: I1125 14:48:45.458628 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="931fe2549977de5714a3ae15453b078b8fb3235e74070c7709a224329f97e24e" Nov 25 14:48:45 crc kubenswrapper[4796]: I1125 14:48:45.462594 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 14:48:45 crc kubenswrapper[4796]: I1125 14:48:45.470710 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:45 crc kubenswrapper[4796]: I1125 14:48:45.661447 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d488015-7c7f-4601-a332-580819e6e571-config-data\") pod \"7d488015-7c7f-4601-a332-580819e6e571\" (UID: \"7d488015-7c7f-4601-a332-580819e6e571\") " Nov 25 14:48:45 crc kubenswrapper[4796]: I1125 14:48:45.661654 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d488015-7c7f-4601-a332-580819e6e571-combined-ca-bundle\") pod \"7d488015-7c7f-4601-a332-580819e6e571\" (UID: \"7d488015-7c7f-4601-a332-580819e6e571\") " Nov 25 14:48:45 crc kubenswrapper[4796]: I1125 14:48:45.661694 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nc55\" (UniqueName: \"kubernetes.io/projected/7d488015-7c7f-4601-a332-580819e6e571-kube-api-access-2nc55\") pod \"7d488015-7c7f-4601-a332-580819e6e571\" (UID: \"7d488015-7c7f-4601-a332-580819e6e571\") " Nov 25 14:48:45 crc kubenswrapper[4796]: I1125 14:48:45.666879 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d488015-7c7f-4601-a332-580819e6e571-kube-api-access-2nc55" (OuterVolumeSpecName: "kube-api-access-2nc55") pod "7d488015-7c7f-4601-a332-580819e6e571" (UID: "7d488015-7c7f-4601-a332-580819e6e571"). InnerVolumeSpecName "kube-api-access-2nc55". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:48:45 crc kubenswrapper[4796]: I1125 14:48:45.694353 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d488015-7c7f-4601-a332-580819e6e571-config-data" (OuterVolumeSpecName: "config-data") pod "7d488015-7c7f-4601-a332-580819e6e571" (UID: "7d488015-7c7f-4601-a332-580819e6e571"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:45 crc kubenswrapper[4796]: I1125 14:48:45.695312 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d488015-7c7f-4601-a332-580819e6e571-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d488015-7c7f-4601-a332-580819e6e571" (UID: "7d488015-7c7f-4601-a332-580819e6e571"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:45 crc kubenswrapper[4796]: I1125 14:48:45.763305 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d488015-7c7f-4601-a332-580819e6e571-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:45 crc kubenswrapper[4796]: I1125 14:48:45.763340 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d488015-7c7f-4601-a332-580819e6e571-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:45 crc kubenswrapper[4796]: I1125 14:48:45.763354 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nc55\" (UniqueName: \"kubernetes.io/projected/7d488015-7c7f-4601-a332-580819e6e571-kube-api-access-2nc55\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:45 crc kubenswrapper[4796]: I1125 14:48:45.949639 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zbqbg" Nov 25 14:48:45 crc kubenswrapper[4796]: I1125 14:48:45.996315 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zbqbg" Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.182343 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zbqbg"] Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.466234 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.496457 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.523065 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.542988 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 14:48:46 crc kubenswrapper[4796]: E1125 14:48:46.543519 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d488015-7c7f-4601-a332-580819e6e571" containerName="nova-cell1-novncproxy-novncproxy" Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.543541 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d488015-7c7f-4601-a332-580819e6e571" containerName="nova-cell1-novncproxy-novncproxy" Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.543828 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d488015-7c7f-4601-a332-580819e6e571" containerName="nova-cell1-novncproxy-novncproxy" Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.544503 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.544610 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.546922 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.547061 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.547312 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.679897 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a14facfc-22d1-4b36-a006-23af447aef93-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a14facfc-22d1-4b36-a006-23af447aef93\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.680017 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a14facfc-22d1-4b36-a006-23af447aef93-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a14facfc-22d1-4b36-a006-23af447aef93\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.680044 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a14facfc-22d1-4b36-a006-23af447aef93-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a14facfc-22d1-4b36-a006-23af447aef93\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.680076 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wvv8m\" (UniqueName: \"kubernetes.io/projected/a14facfc-22d1-4b36-a006-23af447aef93-kube-api-access-wvv8m\") pod \"nova-cell1-novncproxy-0\" (UID: \"a14facfc-22d1-4b36-a006-23af447aef93\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.680103 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a14facfc-22d1-4b36-a006-23af447aef93-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a14facfc-22d1-4b36-a006-23af447aef93\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.784019 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a14facfc-22d1-4b36-a006-23af447aef93-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a14facfc-22d1-4b36-a006-23af447aef93\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.784206 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a14facfc-22d1-4b36-a006-23af447aef93-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a14facfc-22d1-4b36-a006-23af447aef93\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.784243 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a14facfc-22d1-4b36-a006-23af447aef93-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a14facfc-22d1-4b36-a006-23af447aef93\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.784293 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvv8m\" (UniqueName: 
\"kubernetes.io/projected/a14facfc-22d1-4b36-a006-23af447aef93-kube-api-access-wvv8m\") pod \"nova-cell1-novncproxy-0\" (UID: \"a14facfc-22d1-4b36-a006-23af447aef93\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.784328 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a14facfc-22d1-4b36-a006-23af447aef93-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a14facfc-22d1-4b36-a006-23af447aef93\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.792793 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a14facfc-22d1-4b36-a006-23af447aef93-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a14facfc-22d1-4b36-a006-23af447aef93\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.795366 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a14facfc-22d1-4b36-a006-23af447aef93-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a14facfc-22d1-4b36-a006-23af447aef93\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.796434 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a14facfc-22d1-4b36-a006-23af447aef93-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a14facfc-22d1-4b36-a006-23af447aef93\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.797462 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a14facfc-22d1-4b36-a006-23af447aef93-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"a14facfc-22d1-4b36-a006-23af447aef93\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.814372 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvv8m\" (UniqueName: \"kubernetes.io/projected/a14facfc-22d1-4b36-a006-23af447aef93-kube-api-access-wvv8m\") pod \"nova-cell1-novncproxy-0\" (UID: \"a14facfc-22d1-4b36-a006-23af447aef93\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:46 crc kubenswrapper[4796]: I1125 14:48:46.866978 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:47 crc kubenswrapper[4796]: I1125 14:48:47.198611 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 14:48:47 crc kubenswrapper[4796]: I1125 14:48:47.478242 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a14facfc-22d1-4b36-a006-23af447aef93","Type":"ContainerStarted","Data":"20cd5b2c85d8c34cd5085b78da1e2527084bcd45e12fe83fc1f16d6882cd2aba"} Nov 25 14:48:47 crc kubenswrapper[4796]: I1125 14:48:47.478291 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a14facfc-22d1-4b36-a006-23af447aef93","Type":"ContainerStarted","Data":"06d4412e7883012ecd05d8bd05dc9a2201ae5e89f1467c93caf675a0b3ab85bd"} Nov 25 14:48:47 crc kubenswrapper[4796]: I1125 14:48:47.479067 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zbqbg" podUID="b13c00ac-c5a5-413c-8df6-a1b7111a87a3" containerName="registry-server" containerID="cri-o://52b951187fea403d0a6c92c0d2ef573501c015dbfc6b66707fba528e940ec296" gracePeriod=2 Nov 25 14:48:47 crc kubenswrapper[4796]: I1125 14:48:47.510447 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" 
podStartSLOduration=1.510428118 podStartE2EDuration="1.510428118s" podCreationTimestamp="2025-11-25 14:48:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:48:47.49739413 +0000 UTC m=+1455.840503554" watchObservedRunningTime="2025-11-25 14:48:47.510428118 +0000 UTC m=+1455.853537542" Nov 25 14:48:47 crc kubenswrapper[4796]: I1125 14:48:47.923144 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zbqbg" Nov 25 14:48:48 crc kubenswrapper[4796]: I1125 14:48:48.109342 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b13c00ac-c5a5-413c-8df6-a1b7111a87a3-catalog-content\") pod \"b13c00ac-c5a5-413c-8df6-a1b7111a87a3\" (UID: \"b13c00ac-c5a5-413c-8df6-a1b7111a87a3\") " Nov 25 14:48:48 crc kubenswrapper[4796]: I1125 14:48:48.109413 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgk6d\" (UniqueName: \"kubernetes.io/projected/b13c00ac-c5a5-413c-8df6-a1b7111a87a3-kube-api-access-pgk6d\") pod \"b13c00ac-c5a5-413c-8df6-a1b7111a87a3\" (UID: \"b13c00ac-c5a5-413c-8df6-a1b7111a87a3\") " Nov 25 14:48:48 crc kubenswrapper[4796]: I1125 14:48:48.109646 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b13c00ac-c5a5-413c-8df6-a1b7111a87a3-utilities\") pod \"b13c00ac-c5a5-413c-8df6-a1b7111a87a3\" (UID: \"b13c00ac-c5a5-413c-8df6-a1b7111a87a3\") " Nov 25 14:48:48 crc kubenswrapper[4796]: I1125 14:48:48.110272 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b13c00ac-c5a5-413c-8df6-a1b7111a87a3-utilities" (OuterVolumeSpecName: "utilities") pod "b13c00ac-c5a5-413c-8df6-a1b7111a87a3" (UID: "b13c00ac-c5a5-413c-8df6-a1b7111a87a3"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:48:48 crc kubenswrapper[4796]: I1125 14:48:48.121451 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b13c00ac-c5a5-413c-8df6-a1b7111a87a3-kube-api-access-pgk6d" (OuterVolumeSpecName: "kube-api-access-pgk6d") pod "b13c00ac-c5a5-413c-8df6-a1b7111a87a3" (UID: "b13c00ac-c5a5-413c-8df6-a1b7111a87a3"). InnerVolumeSpecName "kube-api-access-pgk6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:48:48 crc kubenswrapper[4796]: I1125 14:48:48.206706 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b13c00ac-c5a5-413c-8df6-a1b7111a87a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b13c00ac-c5a5-413c-8df6-a1b7111a87a3" (UID: "b13c00ac-c5a5-413c-8df6-a1b7111a87a3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:48:48 crc kubenswrapper[4796]: I1125 14:48:48.212499 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b13c00ac-c5a5-413c-8df6-a1b7111a87a3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:48 crc kubenswrapper[4796]: I1125 14:48:48.212547 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgk6d\" (UniqueName: \"kubernetes.io/projected/b13c00ac-c5a5-413c-8df6-a1b7111a87a3-kube-api-access-pgk6d\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:48 crc kubenswrapper[4796]: I1125 14:48:48.212560 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b13c00ac-c5a5-413c-8df6-a1b7111a87a3-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:48 crc kubenswrapper[4796]: I1125 14:48:48.421752 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d488015-7c7f-4601-a332-580819e6e571" 
path="/var/lib/kubelet/pods/7d488015-7c7f-4601-a332-580819e6e571/volumes" Nov 25 14:48:48 crc kubenswrapper[4796]: I1125 14:48:48.489478 4796 generic.go:334] "Generic (PLEG): container finished" podID="b13c00ac-c5a5-413c-8df6-a1b7111a87a3" containerID="52b951187fea403d0a6c92c0d2ef573501c015dbfc6b66707fba528e940ec296" exitCode=0 Nov 25 14:48:48 crc kubenswrapper[4796]: I1125 14:48:48.489594 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zbqbg" Nov 25 14:48:48 crc kubenswrapper[4796]: I1125 14:48:48.489565 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbqbg" event={"ID":"b13c00ac-c5a5-413c-8df6-a1b7111a87a3","Type":"ContainerDied","Data":"52b951187fea403d0a6c92c0d2ef573501c015dbfc6b66707fba528e940ec296"} Nov 25 14:48:48 crc kubenswrapper[4796]: I1125 14:48:48.489645 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbqbg" event={"ID":"b13c00ac-c5a5-413c-8df6-a1b7111a87a3","Type":"ContainerDied","Data":"cfa51c9693c404f618b97e3b2810e972ef4906d4a8bfaa225b77feeb85a650cf"} Nov 25 14:48:48 crc kubenswrapper[4796]: I1125 14:48:48.489666 4796 scope.go:117] "RemoveContainer" containerID="52b951187fea403d0a6c92c0d2ef573501c015dbfc6b66707fba528e940ec296" Nov 25 14:48:48 crc kubenswrapper[4796]: I1125 14:48:48.515416 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zbqbg"] Nov 25 14:48:48 crc kubenswrapper[4796]: I1125 14:48:48.516422 4796 scope.go:117] "RemoveContainer" containerID="d1cbeca4dc5668f48ef290e4378eca931384ffb1ccd4cc2eb2d6c82202bc610c" Nov 25 14:48:48 crc kubenswrapper[4796]: I1125 14:48:48.523493 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zbqbg"] Nov 25 14:48:48 crc kubenswrapper[4796]: I1125 14:48:48.534229 4796 scope.go:117] "RemoveContainer" 
containerID="33a1aff0d959069f87f792ea5a2b96de2ff2ecd9e11746249914c086d10e4af1" Nov 25 14:48:48 crc kubenswrapper[4796]: I1125 14:48:48.584566 4796 scope.go:117] "RemoveContainer" containerID="52b951187fea403d0a6c92c0d2ef573501c015dbfc6b66707fba528e940ec296" Nov 25 14:48:48 crc kubenswrapper[4796]: E1125 14:48:48.584964 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52b951187fea403d0a6c92c0d2ef573501c015dbfc6b66707fba528e940ec296\": container with ID starting with 52b951187fea403d0a6c92c0d2ef573501c015dbfc6b66707fba528e940ec296 not found: ID does not exist" containerID="52b951187fea403d0a6c92c0d2ef573501c015dbfc6b66707fba528e940ec296" Nov 25 14:48:48 crc kubenswrapper[4796]: I1125 14:48:48.585008 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b951187fea403d0a6c92c0d2ef573501c015dbfc6b66707fba528e940ec296"} err="failed to get container status \"52b951187fea403d0a6c92c0d2ef573501c015dbfc6b66707fba528e940ec296\": rpc error: code = NotFound desc = could not find container \"52b951187fea403d0a6c92c0d2ef573501c015dbfc6b66707fba528e940ec296\": container with ID starting with 52b951187fea403d0a6c92c0d2ef573501c015dbfc6b66707fba528e940ec296 not found: ID does not exist" Nov 25 14:48:48 crc kubenswrapper[4796]: I1125 14:48:48.585034 4796 scope.go:117] "RemoveContainer" containerID="d1cbeca4dc5668f48ef290e4378eca931384ffb1ccd4cc2eb2d6c82202bc610c" Nov 25 14:48:48 crc kubenswrapper[4796]: E1125 14:48:48.585468 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1cbeca4dc5668f48ef290e4378eca931384ffb1ccd4cc2eb2d6c82202bc610c\": container with ID starting with d1cbeca4dc5668f48ef290e4378eca931384ffb1ccd4cc2eb2d6c82202bc610c not found: ID does not exist" containerID="d1cbeca4dc5668f48ef290e4378eca931384ffb1ccd4cc2eb2d6c82202bc610c" Nov 25 14:48:48 crc 
kubenswrapper[4796]: I1125 14:48:48.585497 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1cbeca4dc5668f48ef290e4378eca931384ffb1ccd4cc2eb2d6c82202bc610c"} err="failed to get container status \"d1cbeca4dc5668f48ef290e4378eca931384ffb1ccd4cc2eb2d6c82202bc610c\": rpc error: code = NotFound desc = could not find container \"d1cbeca4dc5668f48ef290e4378eca931384ffb1ccd4cc2eb2d6c82202bc610c\": container with ID starting with d1cbeca4dc5668f48ef290e4378eca931384ffb1ccd4cc2eb2d6c82202bc610c not found: ID does not exist" Nov 25 14:48:48 crc kubenswrapper[4796]: I1125 14:48:48.585516 4796 scope.go:117] "RemoveContainer" containerID="33a1aff0d959069f87f792ea5a2b96de2ff2ecd9e11746249914c086d10e4af1" Nov 25 14:48:48 crc kubenswrapper[4796]: E1125 14:48:48.585821 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33a1aff0d959069f87f792ea5a2b96de2ff2ecd9e11746249914c086d10e4af1\": container with ID starting with 33a1aff0d959069f87f792ea5a2b96de2ff2ecd9e11746249914c086d10e4af1 not found: ID does not exist" containerID="33a1aff0d959069f87f792ea5a2b96de2ff2ecd9e11746249914c086d10e4af1" Nov 25 14:48:48 crc kubenswrapper[4796]: I1125 14:48:48.585874 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33a1aff0d959069f87f792ea5a2b96de2ff2ecd9e11746249914c086d10e4af1"} err="failed to get container status \"33a1aff0d959069f87f792ea5a2b96de2ff2ecd9e11746249914c086d10e4af1\": rpc error: code = NotFound desc = could not find container \"33a1aff0d959069f87f792ea5a2b96de2ff2ecd9e11746249914c086d10e4af1\": container with ID starting with 33a1aff0d959069f87f792ea5a2b96de2ff2ecd9e11746249914c086d10e4af1 not found: ID does not exist" Nov 25 14:48:50 crc kubenswrapper[4796]: I1125 14:48:50.421832 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b13c00ac-c5a5-413c-8df6-a1b7111a87a3" 
path="/var/lib/kubelet/pods/b13c00ac-c5a5-413c-8df6-a1b7111a87a3/volumes" Nov 25 14:48:50 crc kubenswrapper[4796]: I1125 14:48:50.728609 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 14:48:50 crc kubenswrapper[4796]: I1125 14:48:50.729184 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 14:48:50 crc kubenswrapper[4796]: I1125 14:48:50.730014 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 14:48:50 crc kubenswrapper[4796]: I1125 14:48:50.731079 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.518657 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.522227 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.706615 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-rwcsm"] Nov 25 14:48:51 crc kubenswrapper[4796]: E1125 14:48:51.710017 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b13c00ac-c5a5-413c-8df6-a1b7111a87a3" containerName="registry-server" Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.713100 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b13c00ac-c5a5-413c-8df6-a1b7111a87a3" containerName="registry-server" Nov 25 14:48:51 crc kubenswrapper[4796]: E1125 14:48:51.713195 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b13c00ac-c5a5-413c-8df6-a1b7111a87a3" containerName="extract-utilities" Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.713202 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b13c00ac-c5a5-413c-8df6-a1b7111a87a3" 
containerName="extract-utilities" Nov 25 14:48:51 crc kubenswrapper[4796]: E1125 14:48:51.713221 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b13c00ac-c5a5-413c-8df6-a1b7111a87a3" containerName="extract-content" Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.713227 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b13c00ac-c5a5-413c-8df6-a1b7111a87a3" containerName="extract-content" Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.713945 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b13c00ac-c5a5-413c-8df6-a1b7111a87a3" containerName="registry-server" Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.721371 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.738923 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-rwcsm"] Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.789902 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-rwcsm\" (UID: \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.789977 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-rwcsm\" (UID: \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.790098 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98k2x\" (UniqueName: 
\"kubernetes.io/projected/a5fc9195-8dc5-407c-9d3b-67b134be75f3-kube-api-access-98k2x\") pod \"dnsmasq-dns-89c5cd4d5-rwcsm\" (UID: \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.790142 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-rwcsm\" (UID: \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.790205 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-rwcsm\" (UID: \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.790367 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-config\") pod \"dnsmasq-dns-89c5cd4d5-rwcsm\" (UID: \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.867392 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.891977 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98k2x\" (UniqueName: \"kubernetes.io/projected/a5fc9195-8dc5-407c-9d3b-67b134be75f3-kube-api-access-98k2x\") pod \"dnsmasq-dns-89c5cd4d5-rwcsm\" (UID: \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\") " 
pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.892041 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-rwcsm\" (UID: \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.892125 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-rwcsm\" (UID: \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.892209 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-config\") pod \"dnsmasq-dns-89c5cd4d5-rwcsm\" (UID: \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.892368 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-rwcsm\" (UID: \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.892423 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-rwcsm\" (UID: \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" Nov 25 14:48:51 crc kubenswrapper[4796]: 
I1125 14:48:51.893255 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-rwcsm\" (UID: \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.893456 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-rwcsm\" (UID: \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.893489 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-rwcsm\" (UID: \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.893507 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-rwcsm\" (UID: \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.893536 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-config\") pod \"dnsmasq-dns-89c5cd4d5-rwcsm\" (UID: \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" Nov 25 14:48:51 crc kubenswrapper[4796]: I1125 14:48:51.916410 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98k2x\" 
(UniqueName: \"kubernetes.io/projected/a5fc9195-8dc5-407c-9d3b-67b134be75f3-kube-api-access-98k2x\") pod \"dnsmasq-dns-89c5cd4d5-rwcsm\" (UID: \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" Nov 25 14:48:52 crc kubenswrapper[4796]: I1125 14:48:52.053456 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" Nov 25 14:48:52 crc kubenswrapper[4796]: I1125 14:48:52.393849 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-rwcsm"] Nov 25 14:48:52 crc kubenswrapper[4796]: W1125 14:48:52.394275 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5fc9195_8dc5_407c_9d3b_67b134be75f3.slice/crio-36598fe32fbb33179bbacf6a1872027c150bc02fd948e95562ddfb9a9a4bdd5c WatchSource:0}: Error finding container 36598fe32fbb33179bbacf6a1872027c150bc02fd948e95562ddfb9a9a4bdd5c: Status 404 returned error can't find the container with id 36598fe32fbb33179bbacf6a1872027c150bc02fd948e95562ddfb9a9a4bdd5c Nov 25 14:48:52 crc kubenswrapper[4796]: I1125 14:48:52.534377 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" event={"ID":"a5fc9195-8dc5-407c-9d3b-67b134be75f3","Type":"ContainerStarted","Data":"36598fe32fbb33179bbacf6a1872027c150bc02fd948e95562ddfb9a9a4bdd5c"} Nov 25 14:48:53 crc kubenswrapper[4796]: I1125 14:48:53.545562 4796 generic.go:334] "Generic (PLEG): container finished" podID="a5fc9195-8dc5-407c-9d3b-67b134be75f3" containerID="9db02dd6cb2e9fc5e49bd5421b28db8eaa8e79aedcea97f22e02193b6a5c003d" exitCode=0 Nov 25 14:48:53 crc kubenswrapper[4796]: I1125 14:48:53.545738 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" event={"ID":"a5fc9195-8dc5-407c-9d3b-67b134be75f3","Type":"ContainerDied","Data":"9db02dd6cb2e9fc5e49bd5421b28db8eaa8e79aedcea97f22e02193b6a5c003d"} 
Nov 25 14:48:54 crc kubenswrapper[4796]: I1125 14:48:54.091871 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:48:54 crc kubenswrapper[4796]: I1125 14:48:54.092777 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d778cf84-10a7-49cf-be96-d14d18e960e0" containerName="proxy-httpd" containerID="cri-o://19cd220a64c4b193dda155587ac1c1e9a5e1a0f52d14a06436c0d2d620529f2f" gracePeriod=30 Nov 25 14:48:54 crc kubenswrapper[4796]: I1125 14:48:54.093302 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d778cf84-10a7-49cf-be96-d14d18e960e0" containerName="sg-core" containerID="cri-o://bc39447c9ba005826ac51690eac012facc2b65e31b9fb195500062d7f7920948" gracePeriod=30 Nov 25 14:48:54 crc kubenswrapper[4796]: I1125 14:48:54.093538 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d778cf84-10a7-49cf-be96-d14d18e960e0" containerName="ceilometer-notification-agent" containerID="cri-o://6211ed8f64f386d07abdabfc5df303947a85ead27bf560d7d5324080262cd8f8" gracePeriod=30 Nov 25 14:48:54 crc kubenswrapper[4796]: I1125 14:48:54.093560 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d778cf84-10a7-49cf-be96-d14d18e960e0" containerName="ceilometer-central-agent" containerID="cri-o://90132cf5aca020aa130b5f67145e856a25a827baf21bcee6c26d59786b1dab12" gracePeriod=30 Nov 25 14:48:54 crc kubenswrapper[4796]: I1125 14:48:54.561012 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" event={"ID":"a5fc9195-8dc5-407c-9d3b-67b134be75f3","Type":"ContainerStarted","Data":"98cbdbb576bd93914e34146db0ecea22151d9b2dfa109010326f0d7e1fa393ec"} Nov 25 14:48:54 crc kubenswrapper[4796]: I1125 14:48:54.561157 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" Nov 25 14:48:54 crc kubenswrapper[4796]: I1125 14:48:54.567702 4796 generic.go:334] "Generic (PLEG): container finished" podID="d778cf84-10a7-49cf-be96-d14d18e960e0" containerID="19cd220a64c4b193dda155587ac1c1e9a5e1a0f52d14a06436c0d2d620529f2f" exitCode=0 Nov 25 14:48:54 crc kubenswrapper[4796]: I1125 14:48:54.567735 4796 generic.go:334] "Generic (PLEG): container finished" podID="d778cf84-10a7-49cf-be96-d14d18e960e0" containerID="bc39447c9ba005826ac51690eac012facc2b65e31b9fb195500062d7f7920948" exitCode=2 Nov 25 14:48:54 crc kubenswrapper[4796]: I1125 14:48:54.567745 4796 generic.go:334] "Generic (PLEG): container finished" podID="d778cf84-10a7-49cf-be96-d14d18e960e0" containerID="90132cf5aca020aa130b5f67145e856a25a827baf21bcee6c26d59786b1dab12" exitCode=0 Nov 25 14:48:54 crc kubenswrapper[4796]: I1125 14:48:54.567770 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d778cf84-10a7-49cf-be96-d14d18e960e0","Type":"ContainerDied","Data":"19cd220a64c4b193dda155587ac1c1e9a5e1a0f52d14a06436c0d2d620529f2f"} Nov 25 14:48:54 crc kubenswrapper[4796]: I1125 14:48:54.567801 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d778cf84-10a7-49cf-be96-d14d18e960e0","Type":"ContainerDied","Data":"bc39447c9ba005826ac51690eac012facc2b65e31b9fb195500062d7f7920948"} Nov 25 14:48:54 crc kubenswrapper[4796]: I1125 14:48:54.567815 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d778cf84-10a7-49cf-be96-d14d18e960e0","Type":"ContainerDied","Data":"90132cf5aca020aa130b5f67145e856a25a827baf21bcee6c26d59786b1dab12"} Nov 25 14:48:54 crc kubenswrapper[4796]: I1125 14:48:54.575447 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 14:48:54 crc kubenswrapper[4796]: I1125 14:48:54.575927 4796 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-api-0" podUID="53a1b699-a865-4992-91be-1b725faa49a6" containerName="nova-api-log" containerID="cri-o://0777a4d2e80f3c8d83d22859ffd355c74ed1f22543ea2d4b7c0b748206cb1850" gracePeriod=30 Nov 25 14:48:54 crc kubenswrapper[4796]: I1125 14:48:54.576000 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="53a1b699-a865-4992-91be-1b725faa49a6" containerName="nova-api-api" containerID="cri-o://b3c063bdbfc5fe8377182b547e9858b80108234a22fee460b4af69787f03d2cd" gracePeriod=30 Nov 25 14:48:54 crc kubenswrapper[4796]: I1125 14:48:54.586097 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" podStartSLOduration=3.5860798689999998 podStartE2EDuration="3.586079869s" podCreationTimestamp="2025-11-25 14:48:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:48:54.582195826 +0000 UTC m=+1462.925305260" watchObservedRunningTime="2025-11-25 14:48:54.586079869 +0000 UTC m=+1462.929189293" Nov 25 14:48:55 crc kubenswrapper[4796]: I1125 14:48:55.584726 4796 generic.go:334] "Generic (PLEG): container finished" podID="53a1b699-a865-4992-91be-1b725faa49a6" containerID="0777a4d2e80f3c8d83d22859ffd355c74ed1f22543ea2d4b7c0b748206cb1850" exitCode=143 Nov 25 14:48:55 crc kubenswrapper[4796]: I1125 14:48:55.584799 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53a1b699-a865-4992-91be-1b725faa49a6","Type":"ContainerDied","Data":"0777a4d2e80f3c8d83d22859ffd355c74ed1f22543ea2d4b7c0b748206cb1850"} Nov 25 14:48:56 crc kubenswrapper[4796]: I1125 14:48:56.867687 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:56 crc kubenswrapper[4796]: I1125 14:48:56.884991 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:57 crc kubenswrapper[4796]: I1125 14:48:57.630293 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 25 14:48:57 crc kubenswrapper[4796]: I1125 14:48:57.869427 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-j7tcp"] Nov 25 14:48:57 crc kubenswrapper[4796]: I1125 14:48:57.872499 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-j7tcp" Nov 25 14:48:57 crc kubenswrapper[4796]: I1125 14:48:57.875799 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 25 14:48:57 crc kubenswrapper[4796]: I1125 14:48:57.876373 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 25 14:48:57 crc kubenswrapper[4796]: I1125 14:48:57.879954 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-j7tcp"] Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.006253 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn4nj\" (UniqueName: \"kubernetes.io/projected/1cd58711-9b15-46ab-b8bf-98c0a3916fd3-kube-api-access-gn4nj\") pod \"nova-cell1-cell-mapping-j7tcp\" (UID: \"1cd58711-9b15-46ab-b8bf-98c0a3916fd3\") " pod="openstack/nova-cell1-cell-mapping-j7tcp" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.006315 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cd58711-9b15-46ab-b8bf-98c0a3916fd3-config-data\") pod \"nova-cell1-cell-mapping-j7tcp\" (UID: \"1cd58711-9b15-46ab-b8bf-98c0a3916fd3\") " pod="openstack/nova-cell1-cell-mapping-j7tcp" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.006419 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cd58711-9b15-46ab-b8bf-98c0a3916fd3-scripts\") pod \"nova-cell1-cell-mapping-j7tcp\" (UID: \"1cd58711-9b15-46ab-b8bf-98c0a3916fd3\") " pod="openstack/nova-cell1-cell-mapping-j7tcp" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.006490 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cd58711-9b15-46ab-b8bf-98c0a3916fd3-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-j7tcp\" (UID: \"1cd58711-9b15-46ab-b8bf-98c0a3916fd3\") " pod="openstack/nova-cell1-cell-mapping-j7tcp" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.108584 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn4nj\" (UniqueName: \"kubernetes.io/projected/1cd58711-9b15-46ab-b8bf-98c0a3916fd3-kube-api-access-gn4nj\") pod \"nova-cell1-cell-mapping-j7tcp\" (UID: \"1cd58711-9b15-46ab-b8bf-98c0a3916fd3\") " pod="openstack/nova-cell1-cell-mapping-j7tcp" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.108651 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cd58711-9b15-46ab-b8bf-98c0a3916fd3-config-data\") pod \"nova-cell1-cell-mapping-j7tcp\" (UID: \"1cd58711-9b15-46ab-b8bf-98c0a3916fd3\") " pod="openstack/nova-cell1-cell-mapping-j7tcp" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.108722 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cd58711-9b15-46ab-b8bf-98c0a3916fd3-scripts\") pod \"nova-cell1-cell-mapping-j7tcp\" (UID: \"1cd58711-9b15-46ab-b8bf-98c0a3916fd3\") " pod="openstack/nova-cell1-cell-mapping-j7tcp" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.108777 4796 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cd58711-9b15-46ab-b8bf-98c0a3916fd3-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-j7tcp\" (UID: \"1cd58711-9b15-46ab-b8bf-98c0a3916fd3\") " pod="openstack/nova-cell1-cell-mapping-j7tcp" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.116346 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cd58711-9b15-46ab-b8bf-98c0a3916fd3-scripts\") pod \"nova-cell1-cell-mapping-j7tcp\" (UID: \"1cd58711-9b15-46ab-b8bf-98c0a3916fd3\") " pod="openstack/nova-cell1-cell-mapping-j7tcp" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.116426 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cd58711-9b15-46ab-b8bf-98c0a3916fd3-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-j7tcp\" (UID: \"1cd58711-9b15-46ab-b8bf-98c0a3916fd3\") " pod="openstack/nova-cell1-cell-mapping-j7tcp" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.118960 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cd58711-9b15-46ab-b8bf-98c0a3916fd3-config-data\") pod \"nova-cell1-cell-mapping-j7tcp\" (UID: \"1cd58711-9b15-46ab-b8bf-98c0a3916fd3\") " pod="openstack/nova-cell1-cell-mapping-j7tcp" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.125047 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn4nj\" (UniqueName: \"kubernetes.io/projected/1cd58711-9b15-46ab-b8bf-98c0a3916fd3-kube-api-access-gn4nj\") pod \"nova-cell1-cell-mapping-j7tcp\" (UID: \"1cd58711-9b15-46ab-b8bf-98c0a3916fd3\") " pod="openstack/nova-cell1-cell-mapping-j7tcp" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.194710 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-j7tcp" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.208374 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.227855 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.312417 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d778cf84-10a7-49cf-be96-d14d18e960e0-run-httpd\") pod \"d778cf84-10a7-49cf-be96-d14d18e960e0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.312604 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-ceilometer-tls-certs\") pod \"d778cf84-10a7-49cf-be96-d14d18e960e0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.312632 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d778cf84-10a7-49cf-be96-d14d18e960e0-log-httpd\") pod \"d778cf84-10a7-49cf-be96-d14d18e960e0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.312688 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-config-data\") pod \"d778cf84-10a7-49cf-be96-d14d18e960e0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.312761 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-898gx\" (UniqueName: 
\"kubernetes.io/projected/d778cf84-10a7-49cf-be96-d14d18e960e0-kube-api-access-898gx\") pod \"d778cf84-10a7-49cf-be96-d14d18e960e0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.312757 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d778cf84-10a7-49cf-be96-d14d18e960e0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d778cf84-10a7-49cf-be96-d14d18e960e0" (UID: "d778cf84-10a7-49cf-be96-d14d18e960e0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.312798 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-scripts\") pod \"d778cf84-10a7-49cf-be96-d14d18e960e0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.312844 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-combined-ca-bundle\") pod \"d778cf84-10a7-49cf-be96-d14d18e960e0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.312871 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-sg-core-conf-yaml\") pod \"d778cf84-10a7-49cf-be96-d14d18e960e0\" (UID: \"d778cf84-10a7-49cf-be96-d14d18e960e0\") " Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.313226 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d778cf84-10a7-49cf-be96-d14d18e960e0-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.313208 4796 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d778cf84-10a7-49cf-be96-d14d18e960e0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d778cf84-10a7-49cf-be96-d14d18e960e0" (UID: "d778cf84-10a7-49cf-be96-d14d18e960e0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.318933 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-scripts" (OuterVolumeSpecName: "scripts") pod "d778cf84-10a7-49cf-be96-d14d18e960e0" (UID: "d778cf84-10a7-49cf-be96-d14d18e960e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.321698 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d778cf84-10a7-49cf-be96-d14d18e960e0-kube-api-access-898gx" (OuterVolumeSpecName: "kube-api-access-898gx") pod "d778cf84-10a7-49cf-be96-d14d18e960e0" (UID: "d778cf84-10a7-49cf-be96-d14d18e960e0"). InnerVolumeSpecName "kube-api-access-898gx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.350116 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d778cf84-10a7-49cf-be96-d14d18e960e0" (UID: "d778cf84-10a7-49cf-be96-d14d18e960e0"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.414846 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53a1b699-a865-4992-91be-1b725faa49a6-logs\") pod \"53a1b699-a865-4992-91be-1b725faa49a6\" (UID: \"53a1b699-a865-4992-91be-1b725faa49a6\") " Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.414901 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a1b699-a865-4992-91be-1b725faa49a6-config-data\") pod \"53a1b699-a865-4992-91be-1b725faa49a6\" (UID: \"53a1b699-a865-4992-91be-1b725faa49a6\") " Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.414989 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm7vm\" (UniqueName: \"kubernetes.io/projected/53a1b699-a865-4992-91be-1b725faa49a6-kube-api-access-pm7vm\") pod \"53a1b699-a865-4992-91be-1b725faa49a6\" (UID: \"53a1b699-a865-4992-91be-1b725faa49a6\") " Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.415093 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a1b699-a865-4992-91be-1b725faa49a6-combined-ca-bundle\") pod \"53a1b699-a865-4992-91be-1b725faa49a6\" (UID: \"53a1b699-a865-4992-91be-1b725faa49a6\") " Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.415608 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.415627 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d778cf84-10a7-49cf-be96-d14d18e960e0-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 
14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.415641 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-898gx\" (UniqueName: \"kubernetes.io/projected/d778cf84-10a7-49cf-be96-d14d18e960e0-kube-api-access-898gx\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.415654 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.419831 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53a1b699-a865-4992-91be-1b725faa49a6-logs" (OuterVolumeSpecName: "logs") pod "53a1b699-a865-4992-91be-1b725faa49a6" (UID: "53a1b699-a865-4992-91be-1b725faa49a6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.421149 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d778cf84-10a7-49cf-be96-d14d18e960e0" (UID: "d778cf84-10a7-49cf-be96-d14d18e960e0"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.426471 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53a1b699-a865-4992-91be-1b725faa49a6-kube-api-access-pm7vm" (OuterVolumeSpecName: "kube-api-access-pm7vm") pod "53a1b699-a865-4992-91be-1b725faa49a6" (UID: "53a1b699-a865-4992-91be-1b725faa49a6"). InnerVolumeSpecName "kube-api-access-pm7vm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.454043 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d778cf84-10a7-49cf-be96-d14d18e960e0" (UID: "d778cf84-10a7-49cf-be96-d14d18e960e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.458226 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53a1b699-a865-4992-91be-1b725faa49a6-config-data" (OuterVolumeSpecName: "config-data") pod "53a1b699-a865-4992-91be-1b725faa49a6" (UID: "53a1b699-a865-4992-91be-1b725faa49a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.464779 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-config-data" (OuterVolumeSpecName: "config-data") pod "d778cf84-10a7-49cf-be96-d14d18e960e0" (UID: "d778cf84-10a7-49cf-be96-d14d18e960e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.472792 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53a1b699-a865-4992-91be-1b725faa49a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53a1b699-a865-4992-91be-1b725faa49a6" (UID: "53a1b699-a865-4992-91be-1b725faa49a6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.517933 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.517970 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53a1b699-a865-4992-91be-1b725faa49a6-logs\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.517982 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a1b699-a865-4992-91be-1b725faa49a6-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.517993 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.518005 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm7vm\" (UniqueName: \"kubernetes.io/projected/53a1b699-a865-4992-91be-1b725faa49a6-kube-api-access-pm7vm\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.518017 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d778cf84-10a7-49cf-be96-d14d18e960e0-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.518027 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a1b699-a865-4992-91be-1b725faa49a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.627349 4796 
generic.go:334] "Generic (PLEG): container finished" podID="d778cf84-10a7-49cf-be96-d14d18e960e0" containerID="6211ed8f64f386d07abdabfc5df303947a85ead27bf560d7d5324080262cd8f8" exitCode=0 Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.627436 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d778cf84-10a7-49cf-be96-d14d18e960e0","Type":"ContainerDied","Data":"6211ed8f64f386d07abdabfc5df303947a85ead27bf560d7d5324080262cd8f8"} Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.627486 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d778cf84-10a7-49cf-be96-d14d18e960e0","Type":"ContainerDied","Data":"f1d8a8b07cb2d81ef35dd21e243f1ff8dcd2da3c188cd2e01952b9410c9fd74e"} Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.627470 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.627505 4796 scope.go:117] "RemoveContainer" containerID="19cd220a64c4b193dda155587ac1c1e9a5e1a0f52d14a06436c0d2d620529f2f" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.631597 4796 generic.go:334] "Generic (PLEG): container finished" podID="53a1b699-a865-4992-91be-1b725faa49a6" containerID="b3c063bdbfc5fe8377182b547e9858b80108234a22fee460b4af69787f03d2cd" exitCode=0 Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.631666 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53a1b699-a865-4992-91be-1b725faa49a6","Type":"ContainerDied","Data":"b3c063bdbfc5fe8377182b547e9858b80108234a22fee460b4af69787f03d2cd"} Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.631695 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.631704 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53a1b699-a865-4992-91be-1b725faa49a6","Type":"ContainerDied","Data":"1470db05f804453c9d199827f1f6a2453dca5d86a8e12ac707d0ab784d15bf01"} Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.664277 4796 scope.go:117] "RemoveContainer" containerID="bc39447c9ba005826ac51690eac012facc2b65e31b9fb195500062d7f7920948" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.666521 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.693988 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.708998 4796 scope.go:117] "RemoveContainer" containerID="6211ed8f64f386d07abdabfc5df303947a85ead27bf560d7d5324080262cd8f8" Nov 25 14:48:58 crc kubenswrapper[4796]: W1125 14:48:58.709314 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1cd58711_9b15_46ab_b8bf_98c0a3916fd3.slice/crio-b6058b9eafd22a428747cfea4e94853767f7d5e50a67bc4e18397fa968146c8e WatchSource:0}: Error finding container b6058b9eafd22a428747cfea4e94853767f7d5e50a67bc4e18397fa968146c8e: Status 404 returned error can't find the container with id b6058b9eafd22a428747cfea4e94853767f7d5e50a67bc4e18397fa968146c8e Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.724648 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.734963 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.757662 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:48:58 crc 
kubenswrapper[4796]: E1125 14:48:58.758256 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d778cf84-10a7-49cf-be96-d14d18e960e0" containerName="proxy-httpd" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.758284 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d778cf84-10a7-49cf-be96-d14d18e960e0" containerName="proxy-httpd" Nov 25 14:48:58 crc kubenswrapper[4796]: E1125 14:48:58.758309 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d778cf84-10a7-49cf-be96-d14d18e960e0" containerName="ceilometer-notification-agent" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.758317 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d778cf84-10a7-49cf-be96-d14d18e960e0" containerName="ceilometer-notification-agent" Nov 25 14:48:58 crc kubenswrapper[4796]: E1125 14:48:58.758331 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53a1b699-a865-4992-91be-1b725faa49a6" containerName="nova-api-api" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.758341 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="53a1b699-a865-4992-91be-1b725faa49a6" containerName="nova-api-api" Nov 25 14:48:58 crc kubenswrapper[4796]: E1125 14:48:58.758365 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53a1b699-a865-4992-91be-1b725faa49a6" containerName="nova-api-log" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.758373 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="53a1b699-a865-4992-91be-1b725faa49a6" containerName="nova-api-log" Nov 25 14:48:58 crc kubenswrapper[4796]: E1125 14:48:58.758387 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d778cf84-10a7-49cf-be96-d14d18e960e0" containerName="sg-core" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.758395 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d778cf84-10a7-49cf-be96-d14d18e960e0" containerName="sg-core" Nov 25 14:48:58 crc kubenswrapper[4796]: 
E1125 14:48:58.758408 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d778cf84-10a7-49cf-be96-d14d18e960e0" containerName="ceilometer-central-agent" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.758415 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d778cf84-10a7-49cf-be96-d14d18e960e0" containerName="ceilometer-central-agent" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.758689 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="53a1b699-a865-4992-91be-1b725faa49a6" containerName="nova-api-log" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.758707 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d778cf84-10a7-49cf-be96-d14d18e960e0" containerName="proxy-httpd" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.758721 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="53a1b699-a865-4992-91be-1b725faa49a6" containerName="nova-api-api" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.758742 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d778cf84-10a7-49cf-be96-d14d18e960e0" containerName="sg-core" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.758758 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d778cf84-10a7-49cf-be96-d14d18e960e0" containerName="ceilometer-central-agent" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.758770 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d778cf84-10a7-49cf-be96-d14d18e960e0" containerName="ceilometer-notification-agent" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.760951 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.766520 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.766789 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.766947 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.771811 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-j7tcp"] Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.789264 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.799186 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.800922 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.802622 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.802873 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.803028 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.808029 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.882139 4796 scope.go:117] "RemoveContainer" containerID="90132cf5aca020aa130b5f67145e856a25a827baf21bcee6c26d59786b1dab12" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.902291 4796 scope.go:117] "RemoveContainer" containerID="19cd220a64c4b193dda155587ac1c1e9a5e1a0f52d14a06436c0d2d620529f2f" Nov 25 14:48:58 crc kubenswrapper[4796]: E1125 14:48:58.902963 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19cd220a64c4b193dda155587ac1c1e9a5e1a0f52d14a06436c0d2d620529f2f\": container with ID starting with 19cd220a64c4b193dda155587ac1c1e9a5e1a0f52d14a06436c0d2d620529f2f not found: ID does not exist" containerID="19cd220a64c4b193dda155587ac1c1e9a5e1a0f52d14a06436c0d2d620529f2f" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.903001 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19cd220a64c4b193dda155587ac1c1e9a5e1a0f52d14a06436c0d2d620529f2f"} err="failed to get container status \"19cd220a64c4b193dda155587ac1c1e9a5e1a0f52d14a06436c0d2d620529f2f\": rpc error: code = NotFound desc = could not find container \"19cd220a64c4b193dda155587ac1c1e9a5e1a0f52d14a06436c0d2d620529f2f\": container with 
ID starting with 19cd220a64c4b193dda155587ac1c1e9a5e1a0f52d14a06436c0d2d620529f2f not found: ID does not exist" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.903026 4796 scope.go:117] "RemoveContainer" containerID="bc39447c9ba005826ac51690eac012facc2b65e31b9fb195500062d7f7920948" Nov 25 14:48:58 crc kubenswrapper[4796]: E1125 14:48:58.903272 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc39447c9ba005826ac51690eac012facc2b65e31b9fb195500062d7f7920948\": container with ID starting with bc39447c9ba005826ac51690eac012facc2b65e31b9fb195500062d7f7920948 not found: ID does not exist" containerID="bc39447c9ba005826ac51690eac012facc2b65e31b9fb195500062d7f7920948" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.903302 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc39447c9ba005826ac51690eac012facc2b65e31b9fb195500062d7f7920948"} err="failed to get container status \"bc39447c9ba005826ac51690eac012facc2b65e31b9fb195500062d7f7920948\": rpc error: code = NotFound desc = could not find container \"bc39447c9ba005826ac51690eac012facc2b65e31b9fb195500062d7f7920948\": container with ID starting with bc39447c9ba005826ac51690eac012facc2b65e31b9fb195500062d7f7920948 not found: ID does not exist" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.903319 4796 scope.go:117] "RemoveContainer" containerID="6211ed8f64f386d07abdabfc5df303947a85ead27bf560d7d5324080262cd8f8" Nov 25 14:48:58 crc kubenswrapper[4796]: E1125 14:48:58.903617 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6211ed8f64f386d07abdabfc5df303947a85ead27bf560d7d5324080262cd8f8\": container with ID starting with 6211ed8f64f386d07abdabfc5df303947a85ead27bf560d7d5324080262cd8f8 not found: ID does not exist" containerID="6211ed8f64f386d07abdabfc5df303947a85ead27bf560d7d5324080262cd8f8" Nov 25 
14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.903649 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6211ed8f64f386d07abdabfc5df303947a85ead27bf560d7d5324080262cd8f8"} err="failed to get container status \"6211ed8f64f386d07abdabfc5df303947a85ead27bf560d7d5324080262cd8f8\": rpc error: code = NotFound desc = could not find container \"6211ed8f64f386d07abdabfc5df303947a85ead27bf560d7d5324080262cd8f8\": container with ID starting with 6211ed8f64f386d07abdabfc5df303947a85ead27bf560d7d5324080262cd8f8 not found: ID does not exist" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.903667 4796 scope.go:117] "RemoveContainer" containerID="90132cf5aca020aa130b5f67145e856a25a827baf21bcee6c26d59786b1dab12" Nov 25 14:48:58 crc kubenswrapper[4796]: E1125 14:48:58.903983 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90132cf5aca020aa130b5f67145e856a25a827baf21bcee6c26d59786b1dab12\": container with ID starting with 90132cf5aca020aa130b5f67145e856a25a827baf21bcee6c26d59786b1dab12 not found: ID does not exist" containerID="90132cf5aca020aa130b5f67145e856a25a827baf21bcee6c26d59786b1dab12" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.904014 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90132cf5aca020aa130b5f67145e856a25a827baf21bcee6c26d59786b1dab12"} err="failed to get container status \"90132cf5aca020aa130b5f67145e856a25a827baf21bcee6c26d59786b1dab12\": rpc error: code = NotFound desc = could not find container \"90132cf5aca020aa130b5f67145e856a25a827baf21bcee6c26d59786b1dab12\": container with ID starting with 90132cf5aca020aa130b5f67145e856a25a827baf21bcee6c26d59786b1dab12 not found: ID does not exist" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.904031 4796 scope.go:117] "RemoveContainer" 
containerID="b3c063bdbfc5fe8377182b547e9858b80108234a22fee460b4af69787f03d2cd" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.928125 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-logs\") pod \"nova-api-0\" (UID: \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\") " pod="openstack/nova-api-0" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.928186 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37724a0c-3784-401a-8214-3dcb37d2ce4f-run-httpd\") pod \"ceilometer-0\" (UID: \"37724a0c-3784-401a-8214-3dcb37d2ce4f\") " pod="openstack/ceilometer-0" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.928212 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/37724a0c-3784-401a-8214-3dcb37d2ce4f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"37724a0c-3784-401a-8214-3dcb37d2ce4f\") " pod="openstack/ceilometer-0" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.928410 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37724a0c-3784-401a-8214-3dcb37d2ce4f-log-httpd\") pod \"ceilometer-0\" (UID: \"37724a0c-3784-401a-8214-3dcb37d2ce4f\") " pod="openstack/ceilometer-0" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.928505 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-config-data\") pod \"nova-api-0\" (UID: \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\") " pod="openstack/nova-api-0" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.928549 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37724a0c-3784-401a-8214-3dcb37d2ce4f-config-data\") pod \"ceilometer-0\" (UID: \"37724a0c-3784-401a-8214-3dcb37d2ce4f\") " pod="openstack/ceilometer-0" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.928684 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37724a0c-3784-401a-8214-3dcb37d2ce4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"37724a0c-3784-401a-8214-3dcb37d2ce4f\") " pod="openstack/ceilometer-0" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.928745 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37724a0c-3784-401a-8214-3dcb37d2ce4f-scripts\") pod \"ceilometer-0\" (UID: \"37724a0c-3784-401a-8214-3dcb37d2ce4f\") " pod="openstack/ceilometer-0" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.928858 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37724a0c-3784-401a-8214-3dcb37d2ce4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"37724a0c-3784-401a-8214-3dcb37d2ce4f\") " pod="openstack/ceilometer-0" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.928909 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-public-tls-certs\") pod \"nova-api-0\" (UID: \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\") " pod="openstack/nova-api-0" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.928946 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\") " pod="openstack/nova-api-0" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.929002 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tc9b\" (UniqueName: \"kubernetes.io/projected/37724a0c-3784-401a-8214-3dcb37d2ce4f-kube-api-access-7tc9b\") pod \"ceilometer-0\" (UID: \"37724a0c-3784-401a-8214-3dcb37d2ce4f\") " pod="openstack/ceilometer-0" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.929220 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm7px\" (UniqueName: \"kubernetes.io/projected/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-kube-api-access-qm7px\") pod \"nova-api-0\" (UID: \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\") " pod="openstack/nova-api-0" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.929256 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\") " pod="openstack/nova-api-0" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.929323 4796 scope.go:117] "RemoveContainer" containerID="0777a4d2e80f3c8d83d22859ffd355c74ed1f22543ea2d4b7c0b748206cb1850" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.952021 4796 scope.go:117] "RemoveContainer" containerID="b3c063bdbfc5fe8377182b547e9858b80108234a22fee460b4af69787f03d2cd" Nov 25 14:48:58 crc kubenswrapper[4796]: E1125 14:48:58.952608 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3c063bdbfc5fe8377182b547e9858b80108234a22fee460b4af69787f03d2cd\": container with ID starting with 
b3c063bdbfc5fe8377182b547e9858b80108234a22fee460b4af69787f03d2cd not found: ID does not exist" containerID="b3c063bdbfc5fe8377182b547e9858b80108234a22fee460b4af69787f03d2cd" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.952652 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3c063bdbfc5fe8377182b547e9858b80108234a22fee460b4af69787f03d2cd"} err="failed to get container status \"b3c063bdbfc5fe8377182b547e9858b80108234a22fee460b4af69787f03d2cd\": rpc error: code = NotFound desc = could not find container \"b3c063bdbfc5fe8377182b547e9858b80108234a22fee460b4af69787f03d2cd\": container with ID starting with b3c063bdbfc5fe8377182b547e9858b80108234a22fee460b4af69787f03d2cd not found: ID does not exist" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.952682 4796 scope.go:117] "RemoveContainer" containerID="0777a4d2e80f3c8d83d22859ffd355c74ed1f22543ea2d4b7c0b748206cb1850" Nov 25 14:48:58 crc kubenswrapper[4796]: E1125 14:48:58.953158 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0777a4d2e80f3c8d83d22859ffd355c74ed1f22543ea2d4b7c0b748206cb1850\": container with ID starting with 0777a4d2e80f3c8d83d22859ffd355c74ed1f22543ea2d4b7c0b748206cb1850 not found: ID does not exist" containerID="0777a4d2e80f3c8d83d22859ffd355c74ed1f22543ea2d4b7c0b748206cb1850" Nov 25 14:48:58 crc kubenswrapper[4796]: I1125 14:48:58.953195 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0777a4d2e80f3c8d83d22859ffd355c74ed1f22543ea2d4b7c0b748206cb1850"} err="failed to get container status \"0777a4d2e80f3c8d83d22859ffd355c74ed1f22543ea2d4b7c0b748206cb1850\": rpc error: code = NotFound desc = could not find container \"0777a4d2e80f3c8d83d22859ffd355c74ed1f22543ea2d4b7c0b748206cb1850\": container with ID starting with 0777a4d2e80f3c8d83d22859ffd355c74ed1f22543ea2d4b7c0b748206cb1850 not found: ID does not 
exist" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.031533 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37724a0c-3784-401a-8214-3dcb37d2ce4f-log-httpd\") pod \"ceilometer-0\" (UID: \"37724a0c-3784-401a-8214-3dcb37d2ce4f\") " pod="openstack/ceilometer-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.031651 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-config-data\") pod \"nova-api-0\" (UID: \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\") " pod="openstack/nova-api-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.032375 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37724a0c-3784-401a-8214-3dcb37d2ce4f-log-httpd\") pod \"ceilometer-0\" (UID: \"37724a0c-3784-401a-8214-3dcb37d2ce4f\") " pod="openstack/ceilometer-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.032411 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37724a0c-3784-401a-8214-3dcb37d2ce4f-config-data\") pod \"ceilometer-0\" (UID: \"37724a0c-3784-401a-8214-3dcb37d2ce4f\") " pod="openstack/ceilometer-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.032482 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37724a0c-3784-401a-8214-3dcb37d2ce4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"37724a0c-3784-401a-8214-3dcb37d2ce4f\") " pod="openstack/ceilometer-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.032513 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37724a0c-3784-401a-8214-3dcb37d2ce4f-scripts\") pod 
\"ceilometer-0\" (UID: \"37724a0c-3784-401a-8214-3dcb37d2ce4f\") " pod="openstack/ceilometer-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.032558 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37724a0c-3784-401a-8214-3dcb37d2ce4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"37724a0c-3784-401a-8214-3dcb37d2ce4f\") " pod="openstack/ceilometer-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.032607 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-public-tls-certs\") pod \"nova-api-0\" (UID: \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\") " pod="openstack/nova-api-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.032656 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\") " pod="openstack/nova-api-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.033031 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tc9b\" (UniqueName: \"kubernetes.io/projected/37724a0c-3784-401a-8214-3dcb37d2ce4f-kube-api-access-7tc9b\") pod \"ceilometer-0\" (UID: \"37724a0c-3784-401a-8214-3dcb37d2ce4f\") " pod="openstack/ceilometer-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.033107 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm7px\" (UniqueName: \"kubernetes.io/projected/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-kube-api-access-qm7px\") pod \"nova-api-0\" (UID: \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\") " pod="openstack/nova-api-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.033141 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\") " pod="openstack/nova-api-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.033196 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-logs\") pod \"nova-api-0\" (UID: \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\") " pod="openstack/nova-api-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.033241 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37724a0c-3784-401a-8214-3dcb37d2ce4f-run-httpd\") pod \"ceilometer-0\" (UID: \"37724a0c-3784-401a-8214-3dcb37d2ce4f\") " pod="openstack/ceilometer-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.033263 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/37724a0c-3784-401a-8214-3dcb37d2ce4f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"37724a0c-3784-401a-8214-3dcb37d2ce4f\") " pod="openstack/ceilometer-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.034004 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-logs\") pod \"nova-api-0\" (UID: \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\") " pod="openstack/nova-api-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.034081 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37724a0c-3784-401a-8214-3dcb37d2ce4f-run-httpd\") pod \"ceilometer-0\" (UID: \"37724a0c-3784-401a-8214-3dcb37d2ce4f\") " pod="openstack/ceilometer-0" 
Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.036287 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-config-data\") pod \"nova-api-0\" (UID: \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\") " pod="openstack/nova-api-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.036733 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37724a0c-3784-401a-8214-3dcb37d2ce4f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"37724a0c-3784-401a-8214-3dcb37d2ce4f\") " pod="openstack/ceilometer-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.036928 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37724a0c-3784-401a-8214-3dcb37d2ce4f-scripts\") pod \"ceilometer-0\" (UID: \"37724a0c-3784-401a-8214-3dcb37d2ce4f\") " pod="openstack/ceilometer-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.037755 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-public-tls-certs\") pod \"nova-api-0\" (UID: \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\") " pod="openstack/nova-api-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.038591 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\") " pod="openstack/nova-api-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.038864 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37724a0c-3784-401a-8214-3dcb37d2ce4f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"37724a0c-3784-401a-8214-3dcb37d2ce4f\") " pod="openstack/ceilometer-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.039180 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\") " pod="openstack/nova-api-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.039752 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37724a0c-3784-401a-8214-3dcb37d2ce4f-config-data\") pod \"ceilometer-0\" (UID: \"37724a0c-3784-401a-8214-3dcb37d2ce4f\") " pod="openstack/ceilometer-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.054194 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/37724a0c-3784-401a-8214-3dcb37d2ce4f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"37724a0c-3784-401a-8214-3dcb37d2ce4f\") " pod="openstack/ceilometer-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.058536 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm7px\" (UniqueName: \"kubernetes.io/projected/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-kube-api-access-qm7px\") pod \"nova-api-0\" (UID: \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\") " pod="openstack/nova-api-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.059357 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tc9b\" (UniqueName: \"kubernetes.io/projected/37724a0c-3784-401a-8214-3dcb37d2ce4f-kube-api-access-7tc9b\") pod \"ceilometer-0\" (UID: \"37724a0c-3784-401a-8214-3dcb37d2ce4f\") " pod="openstack/ceilometer-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.217960 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.225933 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.653133 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-j7tcp" event={"ID":"1cd58711-9b15-46ab-b8bf-98c0a3916fd3","Type":"ContainerStarted","Data":"cc9e11f5c8e35863fa82011842073494fe45dbca8f3f4ed06e875bbc51a230a2"} Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.653206 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-j7tcp" event={"ID":"1cd58711-9b15-46ab-b8bf-98c0a3916fd3","Type":"ContainerStarted","Data":"b6058b9eafd22a428747cfea4e94853767f7d5e50a67bc4e18397fa968146c8e"} Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.703685 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-j7tcp" podStartSLOduration=2.703662948 podStartE2EDuration="2.703662948s" podCreationTimestamp="2025-11-25 14:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:48:59.685477388 +0000 UTC m=+1468.028586822" watchObservedRunningTime="2025-11-25 14:48:59.703662948 +0000 UTC m=+1468.046772412" Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.810631 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 14:48:59 crc kubenswrapper[4796]: I1125 14:48:59.818320 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 14:49:00 crc kubenswrapper[4796]: I1125 14:49:00.429737 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53a1b699-a865-4992-91be-1b725faa49a6" path="/var/lib/kubelet/pods/53a1b699-a865-4992-91be-1b725faa49a6/volumes" Nov 25 14:49:00 crc 
kubenswrapper[4796]: I1125 14:49:00.432387 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d778cf84-10a7-49cf-be96-d14d18e960e0" path="/var/lib/kubelet/pods/d778cf84-10a7-49cf-be96-d14d18e960e0/volumes" Nov 25 14:49:00 crc kubenswrapper[4796]: I1125 14:49:00.676263 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37724a0c-3784-401a-8214-3dcb37d2ce4f","Type":"ContainerStarted","Data":"951080a6520d092d8417ed98de9aa701b5019d700553f536fc70205f7c4e86a9"} Nov 25 14:49:00 crc kubenswrapper[4796]: I1125 14:49:00.676318 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37724a0c-3784-401a-8214-3dcb37d2ce4f","Type":"ContainerStarted","Data":"d7d5823c44f5ba2b81eb1b1440fe00c4d676ddf4fb1681e856954d4b7d56583f"} Nov 25 14:49:00 crc kubenswrapper[4796]: I1125 14:49:00.678715 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb","Type":"ContainerStarted","Data":"ad65470f481b8fbbd7d334ab58a8cc31313ac2e9c52ff4850342b3b04d71dfd9"} Nov 25 14:49:00 crc kubenswrapper[4796]: I1125 14:49:00.678829 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb","Type":"ContainerStarted","Data":"d4233f170df3b23636a7a1175df6eea2192cad26b1f1498585bf43f1eff0d59a"} Nov 25 14:49:00 crc kubenswrapper[4796]: I1125 14:49:00.678902 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb","Type":"ContainerStarted","Data":"4f114d986bb42845b72b0ce82085e6241c494b0ff17ead34f54b08a354b7f178"} Nov 25 14:49:00 crc kubenswrapper[4796]: I1125 14:49:00.701838 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.701814287 podStartE2EDuration="2.701814287s" podCreationTimestamp="2025-11-25 14:48:58 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:49:00.696968646 +0000 UTC m=+1469.040078080" watchObservedRunningTime="2025-11-25 14:49:00.701814287 +0000 UTC m=+1469.044923711" Nov 25 14:49:01 crc kubenswrapper[4796]: I1125 14:49:01.692986 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37724a0c-3784-401a-8214-3dcb37d2ce4f","Type":"ContainerStarted","Data":"ff42c6bb8e5b6e9fb8a37dae6ff8dda05926387400cfa52d44844a56099ab880"} Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.056674 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.118339 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-ljhfb"] Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.120512 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" podUID="bf4002d8-3c37-4da7-8abc-1c9167a7a275" containerName="dnsmasq-dns" containerID="cri-o://c03ee5fb490755c9b4cc444d5381ccf74fc326170c06df26139faf2ef97af89a" gracePeriod=10 Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.702705 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37724a0c-3784-401a-8214-3dcb37d2ce4f","Type":"ContainerStarted","Data":"70c00517d62f4ddedf7ba5b58881f3f9b489adaeb02e8fd7c8b76ebc689d11a1"} Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.704423 4796 generic.go:334] "Generic (PLEG): container finished" podID="bf4002d8-3c37-4da7-8abc-1c9167a7a275" containerID="c03ee5fb490755c9b4cc444d5381ccf74fc326170c06df26139faf2ef97af89a" exitCode=0 Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.704476 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" 
event={"ID":"bf4002d8-3c37-4da7-8abc-1c9167a7a275","Type":"ContainerDied","Data":"c03ee5fb490755c9b4cc444d5381ccf74fc326170c06df26139faf2ef97af89a"} Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.704509 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" event={"ID":"bf4002d8-3c37-4da7-8abc-1c9167a7a275","Type":"ContainerDied","Data":"c9f20dc2b85cd3af2f81955535912fbde5a5c537028c7ac118e40145af741ccb"} Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.704528 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9f20dc2b85cd3af2f81955535912fbde5a5c537028c7ac118e40145af741ccb" Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.705053 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.814607 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-dns-swift-storage-0\") pod \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\" (UID: \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\") " Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.814684 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-ovsdbserver-sb\") pod \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\" (UID: \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\") " Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.814805 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-dns-svc\") pod \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\" (UID: \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\") " Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.814882 4796 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-ovsdbserver-nb\") pod \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\" (UID: \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\") " Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.815407 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcdfq\" (UniqueName: \"kubernetes.io/projected/bf4002d8-3c37-4da7-8abc-1c9167a7a275-kube-api-access-bcdfq\") pod \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\" (UID: \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\") " Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.815433 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-config\") pod \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\" (UID: \"bf4002d8-3c37-4da7-8abc-1c9167a7a275\") " Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.821495 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf4002d8-3c37-4da7-8abc-1c9167a7a275-kube-api-access-bcdfq" (OuterVolumeSpecName: "kube-api-access-bcdfq") pod "bf4002d8-3c37-4da7-8abc-1c9167a7a275" (UID: "bf4002d8-3c37-4da7-8abc-1c9167a7a275"). InnerVolumeSpecName "kube-api-access-bcdfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.874178 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bf4002d8-3c37-4da7-8abc-1c9167a7a275" (UID: "bf4002d8-3c37-4da7-8abc-1c9167a7a275"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.875094 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-config" (OuterVolumeSpecName: "config") pod "bf4002d8-3c37-4da7-8abc-1c9167a7a275" (UID: "bf4002d8-3c37-4da7-8abc-1c9167a7a275"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.878118 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bf4002d8-3c37-4da7-8abc-1c9167a7a275" (UID: "bf4002d8-3c37-4da7-8abc-1c9167a7a275"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.890464 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bf4002d8-3c37-4da7-8abc-1c9167a7a275" (UID: "bf4002d8-3c37-4da7-8abc-1c9167a7a275"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.904984 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bf4002d8-3c37-4da7-8abc-1c9167a7a275" (UID: "bf4002d8-3c37-4da7-8abc-1c9167a7a275"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.918405 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.918449 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.918460 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.918470 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcdfq\" (UniqueName: \"kubernetes.io/projected/bf4002d8-3c37-4da7-8abc-1c9167a7a275-kube-api-access-bcdfq\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.918480 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:02 crc kubenswrapper[4796]: I1125 14:49:02.918488 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf4002d8-3c37-4da7-8abc-1c9167a7a275-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:03 crc kubenswrapper[4796]: I1125 14:49:03.713735 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-ljhfb" Nov 25 14:49:03 crc kubenswrapper[4796]: I1125 14:49:03.753924 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-ljhfb"] Nov 25 14:49:03 crc kubenswrapper[4796]: I1125 14:49:03.768862 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-ljhfb"] Nov 25 14:49:04 crc kubenswrapper[4796]: I1125 14:49:04.426924 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf4002d8-3c37-4da7-8abc-1c9167a7a275" path="/var/lib/kubelet/pods/bf4002d8-3c37-4da7-8abc-1c9167a7a275/volumes" Nov 25 14:49:04 crc kubenswrapper[4796]: I1125 14:49:04.727379 4796 generic.go:334] "Generic (PLEG): container finished" podID="1cd58711-9b15-46ab-b8bf-98c0a3916fd3" containerID="cc9e11f5c8e35863fa82011842073494fe45dbca8f3f4ed06e875bbc51a230a2" exitCode=0 Nov 25 14:49:04 crc kubenswrapper[4796]: I1125 14:49:04.727464 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-j7tcp" event={"ID":"1cd58711-9b15-46ab-b8bf-98c0a3916fd3","Type":"ContainerDied","Data":"cc9e11f5c8e35863fa82011842073494fe45dbca8f3f4ed06e875bbc51a230a2"} Nov 25 14:49:04 crc kubenswrapper[4796]: I1125 14:49:04.733461 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37724a0c-3784-401a-8214-3dcb37d2ce4f","Type":"ContainerStarted","Data":"fe28d8d8244f11cfe44bf6ceadb955f9a573cb51a9592437a331320cf06f6252"} Nov 25 14:49:04 crc kubenswrapper[4796]: I1125 14:49:04.734615 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 14:49:06 crc kubenswrapper[4796]: I1125 14:49:06.101630 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-j7tcp" Nov 25 14:49:06 crc kubenswrapper[4796]: I1125 14:49:06.132314 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.451560529 podStartE2EDuration="8.132288632s" podCreationTimestamp="2025-11-25 14:48:58 +0000 UTC" firstStartedPulling="2025-11-25 14:48:59.815524904 +0000 UTC m=+1468.158634328" lastFinishedPulling="2025-11-25 14:49:03.496253007 +0000 UTC m=+1471.839362431" observedRunningTime="2025-11-25 14:49:04.781639067 +0000 UTC m=+1473.124748501" watchObservedRunningTime="2025-11-25 14:49:06.132288632 +0000 UTC m=+1474.475398056" Nov 25 14:49:06 crc kubenswrapper[4796]: I1125 14:49:06.288277 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn4nj\" (UniqueName: \"kubernetes.io/projected/1cd58711-9b15-46ab-b8bf-98c0a3916fd3-kube-api-access-gn4nj\") pod \"1cd58711-9b15-46ab-b8bf-98c0a3916fd3\" (UID: \"1cd58711-9b15-46ab-b8bf-98c0a3916fd3\") " Nov 25 14:49:06 crc kubenswrapper[4796]: I1125 14:49:06.288361 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cd58711-9b15-46ab-b8bf-98c0a3916fd3-scripts\") pod \"1cd58711-9b15-46ab-b8bf-98c0a3916fd3\" (UID: \"1cd58711-9b15-46ab-b8bf-98c0a3916fd3\") " Nov 25 14:49:06 crc kubenswrapper[4796]: I1125 14:49:06.288647 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cd58711-9b15-46ab-b8bf-98c0a3916fd3-combined-ca-bundle\") pod \"1cd58711-9b15-46ab-b8bf-98c0a3916fd3\" (UID: \"1cd58711-9b15-46ab-b8bf-98c0a3916fd3\") " Nov 25 14:49:06 crc kubenswrapper[4796]: I1125 14:49:06.288790 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cd58711-9b15-46ab-b8bf-98c0a3916fd3-config-data\") pod 
\"1cd58711-9b15-46ab-b8bf-98c0a3916fd3\" (UID: \"1cd58711-9b15-46ab-b8bf-98c0a3916fd3\") " Nov 25 14:49:06 crc kubenswrapper[4796]: I1125 14:49:06.295371 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cd58711-9b15-46ab-b8bf-98c0a3916fd3-kube-api-access-gn4nj" (OuterVolumeSpecName: "kube-api-access-gn4nj") pod "1cd58711-9b15-46ab-b8bf-98c0a3916fd3" (UID: "1cd58711-9b15-46ab-b8bf-98c0a3916fd3"). InnerVolumeSpecName "kube-api-access-gn4nj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:49:06 crc kubenswrapper[4796]: I1125 14:49:06.295965 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cd58711-9b15-46ab-b8bf-98c0a3916fd3-scripts" (OuterVolumeSpecName: "scripts") pod "1cd58711-9b15-46ab-b8bf-98c0a3916fd3" (UID: "1cd58711-9b15-46ab-b8bf-98c0a3916fd3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:49:06 crc kubenswrapper[4796]: I1125 14:49:06.334733 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cd58711-9b15-46ab-b8bf-98c0a3916fd3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1cd58711-9b15-46ab-b8bf-98c0a3916fd3" (UID: "1cd58711-9b15-46ab-b8bf-98c0a3916fd3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:49:06 crc kubenswrapper[4796]: I1125 14:49:06.339314 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cd58711-9b15-46ab-b8bf-98c0a3916fd3-config-data" (OuterVolumeSpecName: "config-data") pod "1cd58711-9b15-46ab-b8bf-98c0a3916fd3" (UID: "1cd58711-9b15-46ab-b8bf-98c0a3916fd3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:49:06 crc kubenswrapper[4796]: I1125 14:49:06.391830 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cd58711-9b15-46ab-b8bf-98c0a3916fd3-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:06 crc kubenswrapper[4796]: I1125 14:49:06.391883 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn4nj\" (UniqueName: \"kubernetes.io/projected/1cd58711-9b15-46ab-b8bf-98c0a3916fd3-kube-api-access-gn4nj\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:06 crc kubenswrapper[4796]: I1125 14:49:06.391905 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cd58711-9b15-46ab-b8bf-98c0a3916fd3-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:06 crc kubenswrapper[4796]: I1125 14:49:06.391926 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cd58711-9b15-46ab-b8bf-98c0a3916fd3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:06 crc kubenswrapper[4796]: I1125 14:49:06.755845 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-j7tcp" event={"ID":"1cd58711-9b15-46ab-b8bf-98c0a3916fd3","Type":"ContainerDied","Data":"b6058b9eafd22a428747cfea4e94853767f7d5e50a67bc4e18397fa968146c8e"} Nov 25 14:49:06 crc kubenswrapper[4796]: I1125 14:49:06.755872 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-j7tcp" Nov 25 14:49:06 crc kubenswrapper[4796]: I1125 14:49:06.755886 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6058b9eafd22a428747cfea4e94853767f7d5e50a67bc4e18397fa968146c8e" Nov 25 14:49:06 crc kubenswrapper[4796]: I1125 14:49:06.933943 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 14:49:06 crc kubenswrapper[4796]: I1125 14:49:06.934409 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb" containerName="nova-api-log" containerID="cri-o://d4233f170df3b23636a7a1175df6eea2192cad26b1f1498585bf43f1eff0d59a" gracePeriod=30 Nov 25 14:49:06 crc kubenswrapper[4796]: I1125 14:49:06.934626 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb" containerName="nova-api-api" containerID="cri-o://ad65470f481b8fbbd7d334ab58a8cc31313ac2e9c52ff4850342b3b04d71dfd9" gracePeriod=30 Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.016476 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.016897 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b7411dd4-cc53-4a32-82ea-03b3b51dbd55" containerName="nova-metadata-log" containerID="cri-o://0813a9e9b442965f567f5d4190003390eb2e596b352dbeb05673ffb66ba926b2" gracePeriod=30 Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.017240 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b7411dd4-cc53-4a32-82ea-03b3b51dbd55" containerName="nova-metadata-metadata" containerID="cri-o://1db5422d68c5080078d38372d84c6ea0effe0dfdfb647d99b56eb7278c921f77" gracePeriod=30 Nov 25 14:49:07 crc 
kubenswrapper[4796]: I1125 14:49:07.032253 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.036095 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="22d80946-e077-4789-8c1f-f67180e2fb9f" containerName="nova-scheduler-scheduler" containerID="cri-o://7e7e59f3940028bd502dd32fb7a03d57a1c94cc8bd282b359b9c9bd1f5ab5651" gracePeriod=30 Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.581662 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.722168 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-config-data\") pod \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\" (UID: \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\") " Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.722329 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-combined-ca-bundle\") pod \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\" (UID: \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\") " Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.722453 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm7px\" (UniqueName: \"kubernetes.io/projected/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-kube-api-access-qm7px\") pod \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\" (UID: \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\") " Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.722491 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-internal-tls-certs\") 
pod \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\" (UID: \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\") " Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.722547 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-logs\") pod \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\" (UID: \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\") " Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.722564 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-public-tls-certs\") pod \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\" (UID: \"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb\") " Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.723025 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-logs" (OuterVolumeSpecName: "logs") pod "5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb" (UID: "5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.723293 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-logs\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.730845 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-kube-api-access-qm7px" (OuterVolumeSpecName: "kube-api-access-qm7px") pod "5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb" (UID: "5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb"). InnerVolumeSpecName "kube-api-access-qm7px". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.753096 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-config-data" (OuterVolumeSpecName: "config-data") pod "5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb" (UID: "5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.755643 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb" (UID: "5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.768601 4796 generic.go:334] "Generic (PLEG): container finished" podID="5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb" containerID="ad65470f481b8fbbd7d334ab58a8cc31313ac2e9c52ff4850342b3b04d71dfd9" exitCode=0 Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.768643 4796 generic.go:334] "Generic (PLEG): container finished" podID="5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb" containerID="d4233f170df3b23636a7a1175df6eea2192cad26b1f1498585bf43f1eff0d59a" exitCode=143 Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.768692 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb","Type":"ContainerDied","Data":"ad65470f481b8fbbd7d334ab58a8cc31313ac2e9c52ff4850342b3b04d71dfd9"} Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.768723 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb","Type":"ContainerDied","Data":"d4233f170df3b23636a7a1175df6eea2192cad26b1f1498585bf43f1eff0d59a"} Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.768736 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb","Type":"ContainerDied","Data":"4f114d986bb42845b72b0ce82085e6241c494b0ff17ead34f54b08a354b7f178"} Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.768757 4796 scope.go:117] "RemoveContainer" containerID="ad65470f481b8fbbd7d334ab58a8cc31313ac2e9c52ff4850342b3b04d71dfd9" Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.768947 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.775838 4796 generic.go:334] "Generic (PLEG): container finished" podID="b7411dd4-cc53-4a32-82ea-03b3b51dbd55" containerID="0813a9e9b442965f567f5d4190003390eb2e596b352dbeb05673ffb66ba926b2" exitCode=143 Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.775889 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b7411dd4-cc53-4a32-82ea-03b3b51dbd55","Type":"ContainerDied","Data":"0813a9e9b442965f567f5d4190003390eb2e596b352dbeb05673ffb66ba926b2"} Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.794175 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb" (UID: "5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.794546 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb" (UID: "5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.801452 4796 scope.go:117] "RemoveContainer" containerID="d4233f170df3b23636a7a1175df6eea2192cad26b1f1498585bf43f1eff0d59a" Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.824838 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm7px\" (UniqueName: \"kubernetes.io/projected/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-kube-api-access-qm7px\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.825158 4796 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.825168 4796 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.825176 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.825205 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.828341 4796 scope.go:117] "RemoveContainer" containerID="ad65470f481b8fbbd7d334ab58a8cc31313ac2e9c52ff4850342b3b04d71dfd9" Nov 25 14:49:07 crc kubenswrapper[4796]: E1125 14:49:07.828860 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad65470f481b8fbbd7d334ab58a8cc31313ac2e9c52ff4850342b3b04d71dfd9\": container with ID starting with ad65470f481b8fbbd7d334ab58a8cc31313ac2e9c52ff4850342b3b04d71dfd9 not found: ID does not exist" containerID="ad65470f481b8fbbd7d334ab58a8cc31313ac2e9c52ff4850342b3b04d71dfd9" Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.828910 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad65470f481b8fbbd7d334ab58a8cc31313ac2e9c52ff4850342b3b04d71dfd9"} err="failed to get container status \"ad65470f481b8fbbd7d334ab58a8cc31313ac2e9c52ff4850342b3b04d71dfd9\": rpc error: code = NotFound desc = could not find container \"ad65470f481b8fbbd7d334ab58a8cc31313ac2e9c52ff4850342b3b04d71dfd9\": container with ID starting with ad65470f481b8fbbd7d334ab58a8cc31313ac2e9c52ff4850342b3b04d71dfd9 not found: ID does not exist" Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.828936 4796 scope.go:117] "RemoveContainer" containerID="d4233f170df3b23636a7a1175df6eea2192cad26b1f1498585bf43f1eff0d59a" Nov 25 14:49:07 crc kubenswrapper[4796]: E1125 14:49:07.829243 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4233f170df3b23636a7a1175df6eea2192cad26b1f1498585bf43f1eff0d59a\": container with ID starting with d4233f170df3b23636a7a1175df6eea2192cad26b1f1498585bf43f1eff0d59a not found: ID does not exist" containerID="d4233f170df3b23636a7a1175df6eea2192cad26b1f1498585bf43f1eff0d59a" Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.829261 4796 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4233f170df3b23636a7a1175df6eea2192cad26b1f1498585bf43f1eff0d59a"} err="failed to get container status \"d4233f170df3b23636a7a1175df6eea2192cad26b1f1498585bf43f1eff0d59a\": rpc error: code = NotFound desc = could not find container \"d4233f170df3b23636a7a1175df6eea2192cad26b1f1498585bf43f1eff0d59a\": container with ID starting with d4233f170df3b23636a7a1175df6eea2192cad26b1f1498585bf43f1eff0d59a not found: ID does not exist" Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.829276 4796 scope.go:117] "RemoveContainer" containerID="ad65470f481b8fbbd7d334ab58a8cc31313ac2e9c52ff4850342b3b04d71dfd9" Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.829519 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad65470f481b8fbbd7d334ab58a8cc31313ac2e9c52ff4850342b3b04d71dfd9"} err="failed to get container status \"ad65470f481b8fbbd7d334ab58a8cc31313ac2e9c52ff4850342b3b04d71dfd9\": rpc error: code = NotFound desc = could not find container \"ad65470f481b8fbbd7d334ab58a8cc31313ac2e9c52ff4850342b3b04d71dfd9\": container with ID starting with ad65470f481b8fbbd7d334ab58a8cc31313ac2e9c52ff4850342b3b04d71dfd9 not found: ID does not exist" Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.829552 4796 scope.go:117] "RemoveContainer" containerID="d4233f170df3b23636a7a1175df6eea2192cad26b1f1498585bf43f1eff0d59a" Nov 25 14:49:07 crc kubenswrapper[4796]: I1125 14:49:07.830031 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4233f170df3b23636a7a1175df6eea2192cad26b1f1498585bf43f1eff0d59a"} err="failed to get container status \"d4233f170df3b23636a7a1175df6eea2192cad26b1f1498585bf43f1eff0d59a\": rpc error: code = NotFound desc = could not find container \"d4233f170df3b23636a7a1175df6eea2192cad26b1f1498585bf43f1eff0d59a\": container with ID starting with 
d4233f170df3b23636a7a1175df6eea2192cad26b1f1498585bf43f1eff0d59a not found: ID does not exist" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.158466 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.172686 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.184889 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 14:49:08 crc kubenswrapper[4796]: E1125 14:49:08.185356 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb" containerName="nova-api-log" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.185377 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb" containerName="nova-api-log" Nov 25 14:49:08 crc kubenswrapper[4796]: E1125 14:49:08.185405 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cd58711-9b15-46ab-b8bf-98c0a3916fd3" containerName="nova-manage" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.185416 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cd58711-9b15-46ab-b8bf-98c0a3916fd3" containerName="nova-manage" Nov 25 14:49:08 crc kubenswrapper[4796]: E1125 14:49:08.185446 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb" containerName="nova-api-api" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.185455 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb" containerName="nova-api-api" Nov 25 14:49:08 crc kubenswrapper[4796]: E1125 14:49:08.185485 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf4002d8-3c37-4da7-8abc-1c9167a7a275" containerName="init" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.185493 4796 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="bf4002d8-3c37-4da7-8abc-1c9167a7a275" containerName="init" Nov 25 14:49:08 crc kubenswrapper[4796]: E1125 14:49:08.185506 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf4002d8-3c37-4da7-8abc-1c9167a7a275" containerName="dnsmasq-dns" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.185514 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf4002d8-3c37-4da7-8abc-1c9167a7a275" containerName="dnsmasq-dns" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.185767 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cd58711-9b15-46ab-b8bf-98c0a3916fd3" containerName="nova-manage" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.185786 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb" containerName="nova-api-log" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.185801 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb" containerName="nova-api-api" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.185821 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf4002d8-3c37-4da7-8abc-1c9167a7a275" containerName="dnsmasq-dns" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.187057 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.195047 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.228062 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.228105 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.228063 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.336267 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86950200-06a3-4ad0-9a40-d70deeba8ce3-config-data\") pod \"nova-api-0\" (UID: \"86950200-06a3-4ad0-9a40-d70deeba8ce3\") " pod="openstack/nova-api-0" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.336324 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86950200-06a3-4ad0-9a40-d70deeba8ce3-logs\") pod \"nova-api-0\" (UID: \"86950200-06a3-4ad0-9a40-d70deeba8ce3\") " pod="openstack/nova-api-0" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.336352 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86950200-06a3-4ad0-9a40-d70deeba8ce3-public-tls-certs\") pod \"nova-api-0\" (UID: \"86950200-06a3-4ad0-9a40-d70deeba8ce3\") " pod="openstack/nova-api-0" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.336383 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/86950200-06a3-4ad0-9a40-d70deeba8ce3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"86950200-06a3-4ad0-9a40-d70deeba8ce3\") " pod="openstack/nova-api-0" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.336547 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86950200-06a3-4ad0-9a40-d70deeba8ce3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"86950200-06a3-4ad0-9a40-d70deeba8ce3\") " pod="openstack/nova-api-0" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.336630 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj9tx\" (UniqueName: \"kubernetes.io/projected/86950200-06a3-4ad0-9a40-d70deeba8ce3-kube-api-access-pj9tx\") pod \"nova-api-0\" (UID: \"86950200-06a3-4ad0-9a40-d70deeba8ce3\") " pod="openstack/nova-api-0" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.428735 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb" path="/var/lib/kubelet/pods/5c1fa6dd-6a9b-4c7d-9cc9-e3b7bbe7fbeb/volumes" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.438286 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj9tx\" (UniqueName: \"kubernetes.io/projected/86950200-06a3-4ad0-9a40-d70deeba8ce3-kube-api-access-pj9tx\") pod \"nova-api-0\" (UID: \"86950200-06a3-4ad0-9a40-d70deeba8ce3\") " pod="openstack/nova-api-0" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.438362 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86950200-06a3-4ad0-9a40-d70deeba8ce3-config-data\") pod \"nova-api-0\" (UID: \"86950200-06a3-4ad0-9a40-d70deeba8ce3\") " pod="openstack/nova-api-0" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.438393 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86950200-06a3-4ad0-9a40-d70deeba8ce3-logs\") pod \"nova-api-0\" (UID: \"86950200-06a3-4ad0-9a40-d70deeba8ce3\") " pod="openstack/nova-api-0" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.438426 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86950200-06a3-4ad0-9a40-d70deeba8ce3-public-tls-certs\") pod \"nova-api-0\" (UID: \"86950200-06a3-4ad0-9a40-d70deeba8ce3\") " pod="openstack/nova-api-0" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.438446 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86950200-06a3-4ad0-9a40-d70deeba8ce3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"86950200-06a3-4ad0-9a40-d70deeba8ce3\") " pod="openstack/nova-api-0" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.438567 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86950200-06a3-4ad0-9a40-d70deeba8ce3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"86950200-06a3-4ad0-9a40-d70deeba8ce3\") " pod="openstack/nova-api-0" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.441339 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86950200-06a3-4ad0-9a40-d70deeba8ce3-logs\") pod \"nova-api-0\" (UID: \"86950200-06a3-4ad0-9a40-d70deeba8ce3\") " pod="openstack/nova-api-0" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.443287 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86950200-06a3-4ad0-9a40-d70deeba8ce3-public-tls-certs\") pod \"nova-api-0\" (UID: \"86950200-06a3-4ad0-9a40-d70deeba8ce3\") " pod="openstack/nova-api-0" Nov 25 14:49:08 crc 
kubenswrapper[4796]: I1125 14:49:08.443955 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86950200-06a3-4ad0-9a40-d70deeba8ce3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"86950200-06a3-4ad0-9a40-d70deeba8ce3\") " pod="openstack/nova-api-0" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.445022 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86950200-06a3-4ad0-9a40-d70deeba8ce3-config-data\") pod \"nova-api-0\" (UID: \"86950200-06a3-4ad0-9a40-d70deeba8ce3\") " pod="openstack/nova-api-0" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.456189 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86950200-06a3-4ad0-9a40-d70deeba8ce3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"86950200-06a3-4ad0-9a40-d70deeba8ce3\") " pod="openstack/nova-api-0" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.466518 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj9tx\" (UniqueName: \"kubernetes.io/projected/86950200-06a3-4ad0-9a40-d70deeba8ce3-kube-api-access-pj9tx\") pod \"nova-api-0\" (UID: \"86950200-06a3-4ad0-9a40-d70deeba8ce3\") " pod="openstack/nova-api-0" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.547026 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.789073 4796 generic.go:334] "Generic (PLEG): container finished" podID="22d80946-e077-4789-8c1f-f67180e2fb9f" containerID="7e7e59f3940028bd502dd32fb7a03d57a1c94cc8bd282b359b9c9bd1f5ab5651" exitCode=0 Nov 25 14:49:08 crc kubenswrapper[4796]: I1125 14:49:08.789159 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"22d80946-e077-4789-8c1f-f67180e2fb9f","Type":"ContainerDied","Data":"7e7e59f3940028bd502dd32fb7a03d57a1c94cc8bd282b359b9c9bd1f5ab5651"} Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 14:49:09.005968 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 14:49:09.089814 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 14:49:09.162277 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d80946-e077-4789-8c1f-f67180e2fb9f-combined-ca-bundle\") pod \"22d80946-e077-4789-8c1f-f67180e2fb9f\" (UID: \"22d80946-e077-4789-8c1f-f67180e2fb9f\") " Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 14:49:09.162511 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d80946-e077-4789-8c1f-f67180e2fb9f-config-data\") pod \"22d80946-e077-4789-8c1f-f67180e2fb9f\" (UID: \"22d80946-e077-4789-8c1f-f67180e2fb9f\") " Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 14:49:09.162598 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqblh\" (UniqueName: \"kubernetes.io/projected/22d80946-e077-4789-8c1f-f67180e2fb9f-kube-api-access-qqblh\") pod \"22d80946-e077-4789-8c1f-f67180e2fb9f\" (UID: 
\"22d80946-e077-4789-8c1f-f67180e2fb9f\") " Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 14:49:09.174691 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22d80946-e077-4789-8c1f-f67180e2fb9f-kube-api-access-qqblh" (OuterVolumeSpecName: "kube-api-access-qqblh") pod "22d80946-e077-4789-8c1f-f67180e2fb9f" (UID: "22d80946-e077-4789-8c1f-f67180e2fb9f"). InnerVolumeSpecName "kube-api-access-qqblh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 14:49:09.195441 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22d80946-e077-4789-8c1f-f67180e2fb9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22d80946-e077-4789-8c1f-f67180e2fb9f" (UID: "22d80946-e077-4789-8c1f-f67180e2fb9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 14:49:09.202354 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22d80946-e077-4789-8c1f-f67180e2fb9f-config-data" (OuterVolumeSpecName: "config-data") pod "22d80946-e077-4789-8c1f-f67180e2fb9f" (UID: "22d80946-e077-4789-8c1f-f67180e2fb9f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 14:49:09.264498 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d80946-e077-4789-8c1f-f67180e2fb9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 14:49:09.264530 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d80946-e077-4789-8c1f-f67180e2fb9f-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 14:49:09.264541 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqblh\" (UniqueName: \"kubernetes.io/projected/22d80946-e077-4789-8c1f-f67180e2fb9f-kube-api-access-qqblh\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 14:49:09.801654 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"86950200-06a3-4ad0-9a40-d70deeba8ce3","Type":"ContainerStarted","Data":"42485594d0f5aa09982013d79c1b45aaa7a23491d74af8e63be4c65cfec536e0"} Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 14:49:09.801945 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"86950200-06a3-4ad0-9a40-d70deeba8ce3","Type":"ContainerStarted","Data":"b3eb3c9c8712d56a82dceb7751469eaeb00bb14a3d80df2b70fdf222ab17137a"} Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 14:49:09.804135 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"22d80946-e077-4789-8c1f-f67180e2fb9f","Type":"ContainerDied","Data":"e6c560e866d1d384efac5896beda5d0ca547311aeb733e861a2ae3f93591415e"} Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 14:49:09.804168 4796 scope.go:117] "RemoveContainer" containerID="7e7e59f3940028bd502dd32fb7a03d57a1c94cc8bd282b359b9c9bd1f5ab5651" Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 
14:49:09.804245 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 14:49:09.844132 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 14:49:09.852803 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 14:49:09.871468 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 14:49:09 crc kubenswrapper[4796]: E1125 14:49:09.872148 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22d80946-e077-4789-8c1f-f67180e2fb9f" containerName="nova-scheduler-scheduler" Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 14:49:09.872183 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="22d80946-e077-4789-8c1f-f67180e2fb9f" containerName="nova-scheduler-scheduler" Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 14:49:09.872569 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="22d80946-e077-4789-8c1f-f67180e2fb9f" containerName="nova-scheduler-scheduler" Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 14:49:09.873540 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 14:49:09.877637 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 14:49:09.884192 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 14:49:09.978633 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f40e0fe8-470b-4092-a179-4e4df56f8900-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f40e0fe8-470b-4092-a179-4e4df56f8900\") " pod="openstack/nova-scheduler-0" Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 14:49:09.978691 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6vr8\" (UniqueName: \"kubernetes.io/projected/f40e0fe8-470b-4092-a179-4e4df56f8900-kube-api-access-l6vr8\") pod \"nova-scheduler-0\" (UID: \"f40e0fe8-470b-4092-a179-4e4df56f8900\") " pod="openstack/nova-scheduler-0" Nov 25 14:49:09 crc kubenswrapper[4796]: I1125 14:49:09.978727 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f40e0fe8-470b-4092-a179-4e4df56f8900-config-data\") pod \"nova-scheduler-0\" (UID: \"f40e0fe8-470b-4092-a179-4e4df56f8900\") " pod="openstack/nova-scheduler-0" Nov 25 14:49:10 crc kubenswrapper[4796]: I1125 14:49:10.080117 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f40e0fe8-470b-4092-a179-4e4df56f8900-config-data\") pod \"nova-scheduler-0\" (UID: \"f40e0fe8-470b-4092-a179-4e4df56f8900\") " pod="openstack/nova-scheduler-0" Nov 25 14:49:10 crc kubenswrapper[4796]: I1125 14:49:10.080286 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f40e0fe8-470b-4092-a179-4e4df56f8900-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f40e0fe8-470b-4092-a179-4e4df56f8900\") " pod="openstack/nova-scheduler-0" Nov 25 14:49:10 crc kubenswrapper[4796]: I1125 14:49:10.080348 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6vr8\" (UniqueName: \"kubernetes.io/projected/f40e0fe8-470b-4092-a179-4e4df56f8900-kube-api-access-l6vr8\") pod \"nova-scheduler-0\" (UID: \"f40e0fe8-470b-4092-a179-4e4df56f8900\") " pod="openstack/nova-scheduler-0" Nov 25 14:49:10 crc kubenswrapper[4796]: I1125 14:49:10.086605 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f40e0fe8-470b-4092-a179-4e4df56f8900-config-data\") pod \"nova-scheduler-0\" (UID: \"f40e0fe8-470b-4092-a179-4e4df56f8900\") " pod="openstack/nova-scheduler-0" Nov 25 14:49:10 crc kubenswrapper[4796]: I1125 14:49:10.089193 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f40e0fe8-470b-4092-a179-4e4df56f8900-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f40e0fe8-470b-4092-a179-4e4df56f8900\") " pod="openstack/nova-scheduler-0" Nov 25 14:49:10 crc kubenswrapper[4796]: I1125 14:49:10.104211 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6vr8\" (UniqueName: \"kubernetes.io/projected/f40e0fe8-470b-4092-a179-4e4df56f8900-kube-api-access-l6vr8\") pod \"nova-scheduler-0\" (UID: \"f40e0fe8-470b-4092-a179-4e4df56f8900\") " pod="openstack/nova-scheduler-0" Nov 25 14:49:10 crc kubenswrapper[4796]: I1125 14:49:10.172127 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b7411dd4-cc53-4a32-82ea-03b3b51dbd55" containerName="nova-metadata-log" probeResult="failure" 
output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:46398->10.217.0.197:8775: read: connection reset by peer" Nov 25 14:49:10 crc kubenswrapper[4796]: I1125 14:49:10.172132 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b7411dd4-cc53-4a32-82ea-03b3b51dbd55" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:46388->10.217.0.197:8775: read: connection reset by peer" Nov 25 14:49:10 crc kubenswrapper[4796]: I1125 14:49:10.204263 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 14:49:10 crc kubenswrapper[4796]: I1125 14:49:10.421227 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22d80946-e077-4789-8c1f-f67180e2fb9f" path="/var/lib/kubelet/pods/22d80946-e077-4789-8c1f-f67180e2fb9f/volumes" Nov 25 14:49:10 crc kubenswrapper[4796]: I1125 14:49:10.675108 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 14:49:10 crc kubenswrapper[4796]: W1125 14:49:10.676895 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf40e0fe8_470b_4092_a179_4e4df56f8900.slice/crio-a1f37dcde4976d75aa28f072bc44dc1c1273d08477c2442b9b68313cb3910509 WatchSource:0}: Error finding container a1f37dcde4976d75aa28f072bc44dc1c1273d08477c2442b9b68313cb3910509: Status 404 returned error can't find the container with id a1f37dcde4976d75aa28f072bc44dc1c1273d08477c2442b9b68313cb3910509 Nov 25 14:49:10 crc kubenswrapper[4796]: I1125 14:49:10.812220 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f40e0fe8-470b-4092-a179-4e4df56f8900","Type":"ContainerStarted","Data":"a1f37dcde4976d75aa28f072bc44dc1c1273d08477c2442b9b68313cb3910509"} Nov 25 14:49:10 crc kubenswrapper[4796]: I1125 14:49:10.815707 4796 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"86950200-06a3-4ad0-9a40-d70deeba8ce3","Type":"ContainerStarted","Data":"06019c305230d1ae1a9d34afd74a40064c6dac34f1b854b43970c507f75550f9"} Nov 25 14:49:10 crc kubenswrapper[4796]: I1125 14:49:10.820391 4796 generic.go:334] "Generic (PLEG): container finished" podID="b7411dd4-cc53-4a32-82ea-03b3b51dbd55" containerID="1db5422d68c5080078d38372d84c6ea0effe0dfdfb647d99b56eb7278c921f77" exitCode=0 Nov 25 14:49:10 crc kubenswrapper[4796]: I1125 14:49:10.820555 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b7411dd4-cc53-4a32-82ea-03b3b51dbd55","Type":"ContainerDied","Data":"1db5422d68c5080078d38372d84c6ea0effe0dfdfb647d99b56eb7278c921f77"} Nov 25 14:49:10 crc kubenswrapper[4796]: I1125 14:49:10.852331 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.852309483 podStartE2EDuration="2.852309483s" podCreationTimestamp="2025-11-25 14:49:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:49:10.844514919 +0000 UTC m=+1479.187624343" watchObservedRunningTime="2025-11-25 14:49:10.852309483 +0000 UTC m=+1479.195418907" Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.252430 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.404179 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-combined-ca-bundle\") pod \"b7411dd4-cc53-4a32-82ea-03b3b51dbd55\" (UID: \"b7411dd4-cc53-4a32-82ea-03b3b51dbd55\") " Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.404266 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-config-data\") pod \"b7411dd4-cc53-4a32-82ea-03b3b51dbd55\" (UID: \"b7411dd4-cc53-4a32-82ea-03b3b51dbd55\") " Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.404310 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-logs\") pod \"b7411dd4-cc53-4a32-82ea-03b3b51dbd55\" (UID: \"b7411dd4-cc53-4a32-82ea-03b3b51dbd55\") " Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.404359 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-nova-metadata-tls-certs\") pod \"b7411dd4-cc53-4a32-82ea-03b3b51dbd55\" (UID: \"b7411dd4-cc53-4a32-82ea-03b3b51dbd55\") " Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.404559 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls7mr\" (UniqueName: \"kubernetes.io/projected/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-kube-api-access-ls7mr\") pod \"b7411dd4-cc53-4a32-82ea-03b3b51dbd55\" (UID: \"b7411dd4-cc53-4a32-82ea-03b3b51dbd55\") " Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.405106 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-logs" (OuterVolumeSpecName: "logs") pod "b7411dd4-cc53-4a32-82ea-03b3b51dbd55" (UID: "b7411dd4-cc53-4a32-82ea-03b3b51dbd55"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.409824 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-kube-api-access-ls7mr" (OuterVolumeSpecName: "kube-api-access-ls7mr") pod "b7411dd4-cc53-4a32-82ea-03b3b51dbd55" (UID: "b7411dd4-cc53-4a32-82ea-03b3b51dbd55"). InnerVolumeSpecName "kube-api-access-ls7mr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.440760 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-config-data" (OuterVolumeSpecName: "config-data") pod "b7411dd4-cc53-4a32-82ea-03b3b51dbd55" (UID: "b7411dd4-cc53-4a32-82ea-03b3b51dbd55"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.457832 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7411dd4-cc53-4a32-82ea-03b3b51dbd55" (UID: "b7411dd4-cc53-4a32-82ea-03b3b51dbd55"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.464623 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b7411dd4-cc53-4a32-82ea-03b3b51dbd55" (UID: "b7411dd4-cc53-4a32-82ea-03b3b51dbd55"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.506224 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.506264 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-logs\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.506277 4796 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.506293 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls7mr\" (UniqueName: \"kubernetes.io/projected/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-kube-api-access-ls7mr\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.506306 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7411dd4-cc53-4a32-82ea-03b3b51dbd55-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.834941 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b7411dd4-cc53-4a32-82ea-03b3b51dbd55","Type":"ContainerDied","Data":"2b9fd5ec6465b483150672fca922e3258eb123dcbf863f3d5c3b53caf4bc8603"} Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.834994 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.835007 4796 scope.go:117] "RemoveContainer" containerID="1db5422d68c5080078d38372d84c6ea0effe0dfdfb647d99b56eb7278c921f77" Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.839177 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f40e0fe8-470b-4092-a179-4e4df56f8900","Type":"ContainerStarted","Data":"f55c581a8e03594c30f990e7a4732a4ff3631b51a28b5ec60ec229b0f213f486"} Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.859074 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.859055242 podStartE2EDuration="2.859055242s" podCreationTimestamp="2025-11-25 14:49:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:49:11.858264167 +0000 UTC m=+1480.201373611" watchObservedRunningTime="2025-11-25 14:49:11.859055242 +0000 UTC m=+1480.202164666" Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.878865 4796 scope.go:117] "RemoveContainer" containerID="0813a9e9b442965f567f5d4190003390eb2e596b352dbeb05673ffb66ba926b2" Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.904874 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.936005 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.945142 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 14:49:11 crc kubenswrapper[4796]: E1125 14:49:11.945552 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7411dd4-cc53-4a32-82ea-03b3b51dbd55" containerName="nova-metadata-metadata" Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 
14:49:11.945586 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7411dd4-cc53-4a32-82ea-03b3b51dbd55" containerName="nova-metadata-metadata" Nov 25 14:49:11 crc kubenswrapper[4796]: E1125 14:49:11.945607 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7411dd4-cc53-4a32-82ea-03b3b51dbd55" containerName="nova-metadata-log" Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.945615 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7411dd4-cc53-4a32-82ea-03b3b51dbd55" containerName="nova-metadata-log" Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.945839 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7411dd4-cc53-4a32-82ea-03b3b51dbd55" containerName="nova-metadata-log" Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.945860 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7411dd4-cc53-4a32-82ea-03b3b51dbd55" containerName="nova-metadata-metadata" Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.946943 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.949326 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.949364 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 14:49:11 crc kubenswrapper[4796]: I1125 14:49:11.966141 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 14:49:12 crc kubenswrapper[4796]: I1125 14:49:12.117912 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/74f7062a-bcf7-494e-81ff-955f99fd6707-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"74f7062a-bcf7-494e-81ff-955f99fd6707\") " pod="openstack/nova-metadata-0" Nov 25 14:49:12 crc kubenswrapper[4796]: I1125 14:49:12.117958 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74f7062a-bcf7-494e-81ff-955f99fd6707-logs\") pod \"nova-metadata-0\" (UID: \"74f7062a-bcf7-494e-81ff-955f99fd6707\") " pod="openstack/nova-metadata-0" Nov 25 14:49:12 crc kubenswrapper[4796]: I1125 14:49:12.117987 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74f7062a-bcf7-494e-81ff-955f99fd6707-config-data\") pod \"nova-metadata-0\" (UID: \"74f7062a-bcf7-494e-81ff-955f99fd6707\") " pod="openstack/nova-metadata-0" Nov 25 14:49:12 crc kubenswrapper[4796]: I1125 14:49:12.118382 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74f7062a-bcf7-494e-81ff-955f99fd6707-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"74f7062a-bcf7-494e-81ff-955f99fd6707\") " pod="openstack/nova-metadata-0" Nov 25 14:49:12 crc kubenswrapper[4796]: I1125 14:49:12.118490 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khrpg\" (UniqueName: \"kubernetes.io/projected/74f7062a-bcf7-494e-81ff-955f99fd6707-kube-api-access-khrpg\") pod \"nova-metadata-0\" (UID: \"74f7062a-bcf7-494e-81ff-955f99fd6707\") " pod="openstack/nova-metadata-0" Nov 25 14:49:12 crc kubenswrapper[4796]: I1125 14:49:12.220668 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/74f7062a-bcf7-494e-81ff-955f99fd6707-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"74f7062a-bcf7-494e-81ff-955f99fd6707\") " pod="openstack/nova-metadata-0" Nov 25 14:49:12 crc kubenswrapper[4796]: I1125 14:49:12.220735 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74f7062a-bcf7-494e-81ff-955f99fd6707-logs\") pod \"nova-metadata-0\" (UID: \"74f7062a-bcf7-494e-81ff-955f99fd6707\") " pod="openstack/nova-metadata-0" Nov 25 14:49:12 crc kubenswrapper[4796]: I1125 14:49:12.220795 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74f7062a-bcf7-494e-81ff-955f99fd6707-config-data\") pod \"nova-metadata-0\" (UID: \"74f7062a-bcf7-494e-81ff-955f99fd6707\") " pod="openstack/nova-metadata-0" Nov 25 14:49:12 crc kubenswrapper[4796]: I1125 14:49:12.220921 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74f7062a-bcf7-494e-81ff-955f99fd6707-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"74f7062a-bcf7-494e-81ff-955f99fd6707\") " pod="openstack/nova-metadata-0" Nov 25 14:49:12 crc kubenswrapper[4796]: I1125 14:49:12.220957 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khrpg\" (UniqueName: \"kubernetes.io/projected/74f7062a-bcf7-494e-81ff-955f99fd6707-kube-api-access-khrpg\") pod \"nova-metadata-0\" (UID: \"74f7062a-bcf7-494e-81ff-955f99fd6707\") " pod="openstack/nova-metadata-0" Nov 25 14:49:12 crc kubenswrapper[4796]: I1125 14:49:12.221930 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74f7062a-bcf7-494e-81ff-955f99fd6707-logs\") pod \"nova-metadata-0\" (UID: \"74f7062a-bcf7-494e-81ff-955f99fd6707\") " pod="openstack/nova-metadata-0" Nov 25 14:49:12 crc kubenswrapper[4796]: I1125 14:49:12.231993 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/74f7062a-bcf7-494e-81ff-955f99fd6707-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"74f7062a-bcf7-494e-81ff-955f99fd6707\") " pod="openstack/nova-metadata-0" Nov 25 14:49:12 crc kubenswrapper[4796]: I1125 14:49:12.232271 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74f7062a-bcf7-494e-81ff-955f99fd6707-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"74f7062a-bcf7-494e-81ff-955f99fd6707\") " pod="openstack/nova-metadata-0" Nov 25 14:49:12 crc kubenswrapper[4796]: I1125 14:49:12.234696 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74f7062a-bcf7-494e-81ff-955f99fd6707-config-data\") pod \"nova-metadata-0\" (UID: \"74f7062a-bcf7-494e-81ff-955f99fd6707\") " pod="openstack/nova-metadata-0" Nov 25 14:49:12 crc kubenswrapper[4796]: I1125 14:49:12.237312 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khrpg\" (UniqueName: \"kubernetes.io/projected/74f7062a-bcf7-494e-81ff-955f99fd6707-kube-api-access-khrpg\") pod 
\"nova-metadata-0\" (UID: \"74f7062a-bcf7-494e-81ff-955f99fd6707\") " pod="openstack/nova-metadata-0" Nov 25 14:49:12 crc kubenswrapper[4796]: I1125 14:49:12.273550 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 14:49:12 crc kubenswrapper[4796]: I1125 14:49:12.428099 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7411dd4-cc53-4a32-82ea-03b3b51dbd55" path="/var/lib/kubelet/pods/b7411dd4-cc53-4a32-82ea-03b3b51dbd55/volumes" Nov 25 14:49:12 crc kubenswrapper[4796]: I1125 14:49:12.813197 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 14:49:12 crc kubenswrapper[4796]: I1125 14:49:12.851158 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"74f7062a-bcf7-494e-81ff-955f99fd6707","Type":"ContainerStarted","Data":"e845a9dc3f6c5ec129f3d35d677c51d232c62c66947ad98b623f5ad56baf6240"} Nov 25 14:49:13 crc kubenswrapper[4796]: I1125 14:49:13.862912 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"74f7062a-bcf7-494e-81ff-955f99fd6707","Type":"ContainerStarted","Data":"4115f8181ee61390359ea477784aea98ae407a4ff4e45286650197ad7f04f0aa"} Nov 25 14:49:13 crc kubenswrapper[4796]: I1125 14:49:13.863445 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"74f7062a-bcf7-494e-81ff-955f99fd6707","Type":"ContainerStarted","Data":"beae081e584e97d057b66e924e9a7df7d8b836a711fcbb82f07d9a8af296810a"} Nov 25 14:49:13 crc kubenswrapper[4796]: I1125 14:49:13.884589 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.884557495 podStartE2EDuration="2.884557495s" podCreationTimestamp="2025-11-25 14:49:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-25 14:49:13.882524362 +0000 UTC m=+1482.225633806" watchObservedRunningTime="2025-11-25 14:49:13.884557495 +0000 UTC m=+1482.227666919" Nov 25 14:49:15 crc kubenswrapper[4796]: I1125 14:49:15.204777 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 14:49:17 crc kubenswrapper[4796]: I1125 14:49:17.273779 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 14:49:17 crc kubenswrapper[4796]: I1125 14:49:17.274206 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 14:49:18 crc kubenswrapper[4796]: I1125 14:49:18.548228 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 14:49:18 crc kubenswrapper[4796]: I1125 14:49:18.548652 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 14:49:19 crc kubenswrapper[4796]: I1125 14:49:19.514139 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 14:49:19 crc kubenswrapper[4796]: I1125 14:49:19.514226 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 14:49:19 crc kubenswrapper[4796]: I1125 14:49:19.557842 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="86950200-06a3-4ad0-9a40-d70deeba8ce3" containerName="nova-api-api" probeResult="failure" output="Get 
\"https://10.217.0.206:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 14:49:19 crc kubenswrapper[4796]: I1125 14:49:19.557884 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="86950200-06a3-4ad0-9a40-d70deeba8ce3" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.206:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 14:49:20 crc kubenswrapper[4796]: I1125 14:49:20.205079 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 25 14:49:20 crc kubenswrapper[4796]: I1125 14:49:20.255671 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 25 14:49:21 crc kubenswrapper[4796]: I1125 14:49:21.025098 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 25 14:49:22 crc kubenswrapper[4796]: I1125 14:49:22.274629 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 14:49:22 crc kubenswrapper[4796]: I1125 14:49:22.275834 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 14:49:23 crc kubenswrapper[4796]: I1125 14:49:23.290839 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="74f7062a-bcf7-494e-81ff-955f99fd6707" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 14:49:23 crc kubenswrapper[4796]: I1125 14:49:23.292212 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="74f7062a-bcf7-494e-81ff-955f99fd6707" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 14:49:28 crc kubenswrapper[4796]: I1125 14:49:28.555555 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 14:49:28 crc kubenswrapper[4796]: I1125 14:49:28.557991 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 14:49:28 crc kubenswrapper[4796]: I1125 14:49:28.563304 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 14:49:28 crc kubenswrapper[4796]: I1125 14:49:28.564721 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 14:49:29 crc kubenswrapper[4796]: I1125 14:49:29.032034 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 14:49:29 crc kubenswrapper[4796]: I1125 14:49:29.039197 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 14:49:29 crc kubenswrapper[4796]: I1125 14:49:29.226501 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 14:49:32 crc kubenswrapper[4796]: I1125 14:49:32.281543 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 14:49:32 crc kubenswrapper[4796]: I1125 14:49:32.286128 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 14:49:32 crc kubenswrapper[4796]: I1125 14:49:32.291638 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 14:49:33 crc kubenswrapper[4796]: I1125 14:49:33.071428 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 14:49:41 crc kubenswrapper[4796]: I1125 14:49:41.066220 4796 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 14:49:41 crc kubenswrapper[4796]: I1125 14:49:41.920391 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 14:49:45 crc kubenswrapper[4796]: I1125 14:49:45.160142 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="df357d5a-93ca-48cc-bcec-b01ba247136e" containerName="rabbitmq" containerID="cri-o://1f0481822b5300edcf061559c56928ac61b6e1ce3f5ef850a84d3cc5af5a950b" gracePeriod=604796 Nov 25 14:49:45 crc kubenswrapper[4796]: I1125 14:49:45.219280 4796 scope.go:117] "RemoveContainer" containerID="e3ab61e96b4b14dde9a37c7a8b99aab5a30ec927bb79c80dfdcc65657de84834" Nov 25 14:49:45 crc kubenswrapper[4796]: I1125 14:49:45.924460 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="1729cee4-39e5-4e3c-90ed-51b16a110a6a" containerName="rabbitmq" containerID="cri-o://12650c0cae01ad371c22b7ea4547c8bfec4c1a7fb39b02450cc1837ad38e5d1a" gracePeriod=604796 Nov 25 14:49:49 crc kubenswrapper[4796]: I1125 14:49:49.514033 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 14:49:49 crc kubenswrapper[4796]: I1125 14:49:49.515113 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 14:49:51 crc kubenswrapper[4796]: I1125 14:49:51.384899 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" 
podUID="df357d5a-93ca-48cc-bcec-b01ba247136e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Nov 25 14:49:51 crc kubenswrapper[4796]: I1125 14:49:51.685638 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="1729cee4-39e5-4e3c-90ed-51b16a110a6a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Nov 25 14:49:51 crc kubenswrapper[4796]: I1125 14:49:51.990151 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.061654 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/df357d5a-93ca-48cc-bcec-b01ba247136e-rabbitmq-plugins\") pod \"df357d5a-93ca-48cc-bcec-b01ba247136e\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.061720 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/df357d5a-93ca-48cc-bcec-b01ba247136e-erlang-cookie-secret\") pod \"df357d5a-93ca-48cc-bcec-b01ba247136e\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.061775 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/df357d5a-93ca-48cc-bcec-b01ba247136e-pod-info\") pod \"df357d5a-93ca-48cc-bcec-b01ba247136e\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.061796 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"df357d5a-93ca-48cc-bcec-b01ba247136e\" (UID: 
\"df357d5a-93ca-48cc-bcec-b01ba247136e\") " Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.061842 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/df357d5a-93ca-48cc-bcec-b01ba247136e-rabbitmq-tls\") pod \"df357d5a-93ca-48cc-bcec-b01ba247136e\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.061882 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/df357d5a-93ca-48cc-bcec-b01ba247136e-server-conf\") pod \"df357d5a-93ca-48cc-bcec-b01ba247136e\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.061910 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df357d5a-93ca-48cc-bcec-b01ba247136e-config-data\") pod \"df357d5a-93ca-48cc-bcec-b01ba247136e\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.061947 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkmzx\" (UniqueName: \"kubernetes.io/projected/df357d5a-93ca-48cc-bcec-b01ba247136e-kube-api-access-vkmzx\") pod \"df357d5a-93ca-48cc-bcec-b01ba247136e\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.061972 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/df357d5a-93ca-48cc-bcec-b01ba247136e-rabbitmq-confd\") pod \"df357d5a-93ca-48cc-bcec-b01ba247136e\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.062055 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/df357d5a-93ca-48cc-bcec-b01ba247136e-rabbitmq-erlang-cookie\") pod \"df357d5a-93ca-48cc-bcec-b01ba247136e\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.062124 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/df357d5a-93ca-48cc-bcec-b01ba247136e-plugins-conf\") pod \"df357d5a-93ca-48cc-bcec-b01ba247136e\" (UID: \"df357d5a-93ca-48cc-bcec-b01ba247136e\") " Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.063235 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df357d5a-93ca-48cc-bcec-b01ba247136e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "df357d5a-93ca-48cc-bcec-b01ba247136e" (UID: "df357d5a-93ca-48cc-bcec-b01ba247136e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.081062 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df357d5a-93ca-48cc-bcec-b01ba247136e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "df357d5a-93ca-48cc-bcec-b01ba247136e" (UID: "df357d5a-93ca-48cc-bcec-b01ba247136e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.081933 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df357d5a-93ca-48cc-bcec-b01ba247136e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "df357d5a-93ca-48cc-bcec-b01ba247136e" (UID: "df357d5a-93ca-48cc-bcec-b01ba247136e"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.082301 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df357d5a-93ca-48cc-bcec-b01ba247136e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "df357d5a-93ca-48cc-bcec-b01ba247136e" (UID: "df357d5a-93ca-48cc-bcec-b01ba247136e"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.092912 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "df357d5a-93ca-48cc-bcec-b01ba247136e" (UID: "df357d5a-93ca-48cc-bcec-b01ba247136e"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.094052 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df357d5a-93ca-48cc-bcec-b01ba247136e-kube-api-access-vkmzx" (OuterVolumeSpecName: "kube-api-access-vkmzx") pod "df357d5a-93ca-48cc-bcec-b01ba247136e" (UID: "df357d5a-93ca-48cc-bcec-b01ba247136e"). InnerVolumeSpecName "kube-api-access-vkmzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.096605 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/df357d5a-93ca-48cc-bcec-b01ba247136e-pod-info" (OuterVolumeSpecName: "pod-info") pod "df357d5a-93ca-48cc-bcec-b01ba247136e" (UID: "df357d5a-93ca-48cc-bcec-b01ba247136e"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.103488 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df357d5a-93ca-48cc-bcec-b01ba247136e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "df357d5a-93ca-48cc-bcec-b01ba247136e" (UID: "df357d5a-93ca-48cc-bcec-b01ba247136e"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.127550 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df357d5a-93ca-48cc-bcec-b01ba247136e-config-data" (OuterVolumeSpecName: "config-data") pod "df357d5a-93ca-48cc-bcec-b01ba247136e" (UID: "df357d5a-93ca-48cc-bcec-b01ba247136e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.165110 4796 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/df357d5a-93ca-48cc-bcec-b01ba247136e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.165385 4796 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/df357d5a-93ca-48cc-bcec-b01ba247136e-pod-info\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.165464 4796 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.165546 4796 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/df357d5a-93ca-48cc-bcec-b01ba247136e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" 
Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.165635 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df357d5a-93ca-48cc-bcec-b01ba247136e-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.165700 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkmzx\" (UniqueName: \"kubernetes.io/projected/df357d5a-93ca-48cc-bcec-b01ba247136e-kube-api-access-vkmzx\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.165762 4796 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/df357d5a-93ca-48cc-bcec-b01ba247136e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.165819 4796 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/df357d5a-93ca-48cc-bcec-b01ba247136e-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.165878 4796 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/df357d5a-93ca-48cc-bcec-b01ba247136e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.208356 4796 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.237122 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df357d5a-93ca-48cc-bcec-b01ba247136e-server-conf" (OuterVolumeSpecName: "server-conf") pod "df357d5a-93ca-48cc-bcec-b01ba247136e" (UID: "df357d5a-93ca-48cc-bcec-b01ba247136e"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.268582 4796 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.268610 4796 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/df357d5a-93ca-48cc-bcec-b01ba247136e-server-conf\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.288619 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df357d5a-93ca-48cc-bcec-b01ba247136e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "df357d5a-93ca-48cc-bcec-b01ba247136e" (UID: "df357d5a-93ca-48cc-bcec-b01ba247136e"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.327050 4796 generic.go:334] "Generic (PLEG): container finished" podID="df357d5a-93ca-48cc-bcec-b01ba247136e" containerID="1f0481822b5300edcf061559c56928ac61b6e1ce3f5ef850a84d3cc5af5a950b" exitCode=0 Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.327778 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"df357d5a-93ca-48cc-bcec-b01ba247136e","Type":"ContainerDied","Data":"1f0481822b5300edcf061559c56928ac61b6e1ce3f5ef850a84d3cc5af5a950b"} Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.327911 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"df357d5a-93ca-48cc-bcec-b01ba247136e","Type":"ContainerDied","Data":"84c922b6bf5dec75ade0e3eccb6293a40c8ff11ba8209679e57238bf5be8a933"} Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.328015 4796 scope.go:117] "RemoveContainer" 
containerID="1f0481822b5300edcf061559c56928ac61b6e1ce3f5ef850a84d3cc5af5a950b" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.328260 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.369972 4796 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/df357d5a-93ca-48cc-bcec-b01ba247136e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.381804 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.382010 4796 scope.go:117] "RemoveContainer" containerID="2f9c4eeced77b6eec1a9654b22885d08fe3f01dda9ebd117770b343c82ceaa1c" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.393918 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.419959 4796 scope.go:117] "RemoveContainer" containerID="1f0481822b5300edcf061559c56928ac61b6e1ce3f5ef850a84d3cc5af5a950b" Nov 25 14:49:52 crc kubenswrapper[4796]: E1125 14:49:52.424014 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f0481822b5300edcf061559c56928ac61b6e1ce3f5ef850a84d3cc5af5a950b\": container with ID starting with 1f0481822b5300edcf061559c56928ac61b6e1ce3f5ef850a84d3cc5af5a950b not found: ID does not exist" containerID="1f0481822b5300edcf061559c56928ac61b6e1ce3f5ef850a84d3cc5af5a950b" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.424055 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f0481822b5300edcf061559c56928ac61b6e1ce3f5ef850a84d3cc5af5a950b"} err="failed to get container status \"1f0481822b5300edcf061559c56928ac61b6e1ce3f5ef850a84d3cc5af5a950b\": rpc 
error: code = NotFound desc = could not find container \"1f0481822b5300edcf061559c56928ac61b6e1ce3f5ef850a84d3cc5af5a950b\": container with ID starting with 1f0481822b5300edcf061559c56928ac61b6e1ce3f5ef850a84d3cc5af5a950b not found: ID does not exist" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.424084 4796 scope.go:117] "RemoveContainer" containerID="2f9c4eeced77b6eec1a9654b22885d08fe3f01dda9ebd117770b343c82ceaa1c" Nov 25 14:49:52 crc kubenswrapper[4796]: E1125 14:49:52.427023 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f9c4eeced77b6eec1a9654b22885d08fe3f01dda9ebd117770b343c82ceaa1c\": container with ID starting with 2f9c4eeced77b6eec1a9654b22885d08fe3f01dda9ebd117770b343c82ceaa1c not found: ID does not exist" containerID="2f9c4eeced77b6eec1a9654b22885d08fe3f01dda9ebd117770b343c82ceaa1c" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.427076 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f9c4eeced77b6eec1a9654b22885d08fe3f01dda9ebd117770b343c82ceaa1c"} err="failed to get container status \"2f9c4eeced77b6eec1a9654b22885d08fe3f01dda9ebd117770b343c82ceaa1c\": rpc error: code = NotFound desc = could not find container \"2f9c4eeced77b6eec1a9654b22885d08fe3f01dda9ebd117770b343c82ceaa1c\": container with ID starting with 2f9c4eeced77b6eec1a9654b22885d08fe3f01dda9ebd117770b343c82ceaa1c not found: ID does not exist" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.429626 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df357d5a-93ca-48cc-bcec-b01ba247136e" path="/var/lib/kubelet/pods/df357d5a-93ca-48cc-bcec-b01ba247136e/volumes" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.430199 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 14:49:52 crc kubenswrapper[4796]: E1125 14:49:52.430504 4796 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="df357d5a-93ca-48cc-bcec-b01ba247136e" containerName="rabbitmq" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.430518 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="df357d5a-93ca-48cc-bcec-b01ba247136e" containerName="rabbitmq" Nov 25 14:49:52 crc kubenswrapper[4796]: E1125 14:49:52.430546 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df357d5a-93ca-48cc-bcec-b01ba247136e" containerName="setup-container" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.430552 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="df357d5a-93ca-48cc-bcec-b01ba247136e" containerName="setup-container" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.430775 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="df357d5a-93ca-48cc-bcec-b01ba247136e" containerName="rabbitmq" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.440291 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.440391 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.442471 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.444137 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.444306 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.444416 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.444637 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.444764 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-r8wzf" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.445944 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.576519 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bde17cd-d557-45b1-8796-d7293d21c038-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.577655 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bde17cd-d557-45b1-8796-d7293d21c038-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " 
pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.577790 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bde17cd-d557-45b1-8796-d7293d21c038-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.577898 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bde17cd-d557-45b1-8796-d7293d21c038-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.577956 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bde17cd-d557-45b1-8796-d7293d21c038-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.578045 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0bde17cd-d557-45b1-8796-d7293d21c038-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.578172 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bde17cd-d557-45b1-8796-d7293d21c038-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 
crc kubenswrapper[4796]: I1125 14:49:52.578327 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npscm\" (UniqueName: \"kubernetes.io/projected/0bde17cd-d557-45b1-8796-d7293d21c038-kube-api-access-npscm\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.578382 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bde17cd-d557-45b1-8796-d7293d21c038-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.578417 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0bde17cd-d557-45b1-8796-d7293d21c038-config-data\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.578461 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.679814 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bde17cd-d557-45b1-8796-d7293d21c038-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.679872 4796 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bde17cd-d557-45b1-8796-d7293d21c038-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.679917 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0bde17cd-d557-45b1-8796-d7293d21c038-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.679964 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bde17cd-d557-45b1-8796-d7293d21c038-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.679994 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npscm\" (UniqueName: \"kubernetes.io/projected/0bde17cd-d557-45b1-8796-d7293d21c038-kube-api-access-npscm\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.680014 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bde17cd-d557-45b1-8796-d7293d21c038-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.680034 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0bde17cd-d557-45b1-8796-d7293d21c038-config-data\") pod \"rabbitmq-server-0\" 
(UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.680060 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.680084 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bde17cd-d557-45b1-8796-d7293d21c038-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.680101 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bde17cd-d557-45b1-8796-d7293d21c038-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.680136 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bde17cd-d557-45b1-8796-d7293d21c038-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.681010 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bde17cd-d557-45b1-8796-d7293d21c038-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.681382 4796 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.681661 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0bde17cd-d557-45b1-8796-d7293d21c038-config-data\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.682131 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bde17cd-d557-45b1-8796-d7293d21c038-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.683738 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bde17cd-d557-45b1-8796-d7293d21c038-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.682691 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bde17cd-d557-45b1-8796-d7293d21c038-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.685864 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0bde17cd-d557-45b1-8796-d7293d21c038-rabbitmq-tls\") pod 
\"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.686147 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bde17cd-d557-45b1-8796-d7293d21c038-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.686672 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bde17cd-d557-45b1-8796-d7293d21c038-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.691176 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bde17cd-d557-45b1-8796-d7293d21c038-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.713896 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npscm\" (UniqueName: \"kubernetes.io/projected/0bde17cd-d557-45b1-8796-d7293d21c038-kube-api-access-npscm\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.724711 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"0bde17cd-d557-45b1-8796-d7293d21c038\") " pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.825256 4796 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 14:49:52 crc kubenswrapper[4796]: I1125 14:49:52.908258 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.091180 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1729cee4-39e5-4e3c-90ed-51b16a110a6a-config-data\") pod \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.091288 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1729cee4-39e5-4e3c-90ed-51b16a110a6a-rabbitmq-erlang-cookie\") pod \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.091327 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1729cee4-39e5-4e3c-90ed-51b16a110a6a-plugins-conf\") pod \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.091354 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz6p2\" (UniqueName: \"kubernetes.io/projected/1729cee4-39e5-4e3c-90ed-51b16a110a6a-kube-api-access-gz6p2\") pod \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.091402 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1729cee4-39e5-4e3c-90ed-51b16a110a6a-rabbitmq-confd\") pod \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\" (UID: 
\"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.091440 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1729cee4-39e5-4e3c-90ed-51b16a110a6a-erlang-cookie-secret\") pod \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.091475 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1729cee4-39e5-4e3c-90ed-51b16a110a6a-rabbitmq-plugins\") pod \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.091505 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1729cee4-39e5-4e3c-90ed-51b16a110a6a-rabbitmq-tls\") pod \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.091550 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1729cee4-39e5-4e3c-90ed-51b16a110a6a-server-conf\") pod \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.091614 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1729cee4-39e5-4e3c-90ed-51b16a110a6a-pod-info\") pod \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.091636 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\" (UID: \"1729cee4-39e5-4e3c-90ed-51b16a110a6a\") " Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.125897 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1729cee4-39e5-4e3c-90ed-51b16a110a6a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "1729cee4-39e5-4e3c-90ed-51b16a110a6a" (UID: "1729cee4-39e5-4e3c-90ed-51b16a110a6a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.126677 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1729cee4-39e5-4e3c-90ed-51b16a110a6a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "1729cee4-39e5-4e3c-90ed-51b16a110a6a" (UID: "1729cee4-39e5-4e3c-90ed-51b16a110a6a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.127829 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "1729cee4-39e5-4e3c-90ed-51b16a110a6a" (UID: "1729cee4-39e5-4e3c-90ed-51b16a110a6a"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.128536 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1729cee4-39e5-4e3c-90ed-51b16a110a6a-config-data" (OuterVolumeSpecName: "config-data") pod "1729cee4-39e5-4e3c-90ed-51b16a110a6a" (UID: "1729cee4-39e5-4e3c-90ed-51b16a110a6a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.128955 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1729cee4-39e5-4e3c-90ed-51b16a110a6a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "1729cee4-39e5-4e3c-90ed-51b16a110a6a" (UID: "1729cee4-39e5-4e3c-90ed-51b16a110a6a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.136145 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1729cee4-39e5-4e3c-90ed-51b16a110a6a-kube-api-access-gz6p2" (OuterVolumeSpecName: "kube-api-access-gz6p2") pod "1729cee4-39e5-4e3c-90ed-51b16a110a6a" (UID: "1729cee4-39e5-4e3c-90ed-51b16a110a6a"). InnerVolumeSpecName "kube-api-access-gz6p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.141657 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1729cee4-39e5-4e3c-90ed-51b16a110a6a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "1729cee4-39e5-4e3c-90ed-51b16a110a6a" (UID: "1729cee4-39e5-4e3c-90ed-51b16a110a6a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.143492 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1729cee4-39e5-4e3c-90ed-51b16a110a6a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "1729cee4-39e5-4e3c-90ed-51b16a110a6a" (UID: "1729cee4-39e5-4e3c-90ed-51b16a110a6a"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.152797 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/1729cee4-39e5-4e3c-90ed-51b16a110a6a-pod-info" (OuterVolumeSpecName: "pod-info") pod "1729cee4-39e5-4e3c-90ed-51b16a110a6a" (UID: "1729cee4-39e5-4e3c-90ed-51b16a110a6a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.194591 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1729cee4-39e5-4e3c-90ed-51b16a110a6a-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.194627 4796 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1729cee4-39e5-4e3c-90ed-51b16a110a6a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.194638 4796 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1729cee4-39e5-4e3c-90ed-51b16a110a6a-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.194647 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gz6p2\" (UniqueName: \"kubernetes.io/projected/1729cee4-39e5-4e3c-90ed-51b16a110a6a-kube-api-access-gz6p2\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.194658 4796 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1729cee4-39e5-4e3c-90ed-51b16a110a6a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.194671 4796 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/1729cee4-39e5-4e3c-90ed-51b16a110a6a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.194680 4796 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1729cee4-39e5-4e3c-90ed-51b16a110a6a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.194695 4796 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1729cee4-39e5-4e3c-90ed-51b16a110a6a-pod-info\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.194717 4796 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.201379 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.216208 4796 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.227762 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1729cee4-39e5-4e3c-90ed-51b16a110a6a-server-conf" (OuterVolumeSpecName: "server-conf") pod "1729cee4-39e5-4e3c-90ed-51b16a110a6a" (UID: "1729cee4-39e5-4e3c-90ed-51b16a110a6a"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.251135 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1729cee4-39e5-4e3c-90ed-51b16a110a6a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "1729cee4-39e5-4e3c-90ed-51b16a110a6a" (UID: "1729cee4-39e5-4e3c-90ed-51b16a110a6a"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.296126 4796 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1729cee4-39e5-4e3c-90ed-51b16a110a6a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.296149 4796 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1729cee4-39e5-4e3c-90ed-51b16a110a6a-server-conf\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.296158 4796 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.337638 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0bde17cd-d557-45b1-8796-d7293d21c038","Type":"ContainerStarted","Data":"3bce82eb62733a38f77f82755f7e77e328f9b09a712bd256b3a514ec72d4a50e"} Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.342339 4796 generic.go:334] "Generic (PLEG): container finished" podID="1729cee4-39e5-4e3c-90ed-51b16a110a6a" containerID="12650c0cae01ad371c22b7ea4547c8bfec4c1a7fb39b02450cc1837ad38e5d1a" exitCode=0 Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.342387 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"1729cee4-39e5-4e3c-90ed-51b16a110a6a","Type":"ContainerDied","Data":"12650c0cae01ad371c22b7ea4547c8bfec4c1a7fb39b02450cc1837ad38e5d1a"} Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.342417 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1729cee4-39e5-4e3c-90ed-51b16a110a6a","Type":"ContainerDied","Data":"4194d5f27c5d9412a620f2b0859b5330fae76cf094dd7d9e110177cc6418d04e"} Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.342437 4796 scope.go:117] "RemoveContainer" containerID="12650c0cae01ad371c22b7ea4547c8bfec4c1a7fb39b02450cc1837ad38e5d1a" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.342545 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.378148 4796 scope.go:117] "RemoveContainer" containerID="8961effce604b4c965298d42d063ee066e28fd802e9ac7ff26c6935f9c6c981d" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.381519 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.391882 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.415483 4796 scope.go:117] "RemoveContainer" containerID="12650c0cae01ad371c22b7ea4547c8bfec4c1a7fb39b02450cc1837ad38e5d1a" Nov 25 14:49:53 crc kubenswrapper[4796]: E1125 14:49:53.416053 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12650c0cae01ad371c22b7ea4547c8bfec4c1a7fb39b02450cc1837ad38e5d1a\": container with ID starting with 12650c0cae01ad371c22b7ea4547c8bfec4c1a7fb39b02450cc1837ad38e5d1a not found: ID does not exist" containerID="12650c0cae01ad371c22b7ea4547c8bfec4c1a7fb39b02450cc1837ad38e5d1a" Nov 25 14:49:53 crc 
kubenswrapper[4796]: I1125 14:49:53.416090 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12650c0cae01ad371c22b7ea4547c8bfec4c1a7fb39b02450cc1837ad38e5d1a"} err="failed to get container status \"12650c0cae01ad371c22b7ea4547c8bfec4c1a7fb39b02450cc1837ad38e5d1a\": rpc error: code = NotFound desc = could not find container \"12650c0cae01ad371c22b7ea4547c8bfec4c1a7fb39b02450cc1837ad38e5d1a\": container with ID starting with 12650c0cae01ad371c22b7ea4547c8bfec4c1a7fb39b02450cc1837ad38e5d1a not found: ID does not exist" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.416114 4796 scope.go:117] "RemoveContainer" containerID="8961effce604b4c965298d42d063ee066e28fd802e9ac7ff26c6935f9c6c981d" Nov 25 14:49:53 crc kubenswrapper[4796]: E1125 14:49:53.416847 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8961effce604b4c965298d42d063ee066e28fd802e9ac7ff26c6935f9c6c981d\": container with ID starting with 8961effce604b4c965298d42d063ee066e28fd802e9ac7ff26c6935f9c6c981d not found: ID does not exist" containerID="8961effce604b4c965298d42d063ee066e28fd802e9ac7ff26c6935f9c6c981d" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.416868 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8961effce604b4c965298d42d063ee066e28fd802e9ac7ff26c6935f9c6c981d"} err="failed to get container status \"8961effce604b4c965298d42d063ee066e28fd802e9ac7ff26c6935f9c6c981d\": rpc error: code = NotFound desc = could not find container \"8961effce604b4c965298d42d063ee066e28fd802e9ac7ff26c6935f9c6c981d\": container with ID starting with 8961effce604b4c965298d42d063ee066e28fd802e9ac7ff26c6935f9c6c981d not found: ID does not exist" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.420617 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 14:49:53 crc 
kubenswrapper[4796]: E1125 14:49:53.421011 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1729cee4-39e5-4e3c-90ed-51b16a110a6a" containerName="setup-container" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.421027 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1729cee4-39e5-4e3c-90ed-51b16a110a6a" containerName="setup-container" Nov 25 14:49:53 crc kubenswrapper[4796]: E1125 14:49:53.421039 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1729cee4-39e5-4e3c-90ed-51b16a110a6a" containerName="rabbitmq" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.421045 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1729cee4-39e5-4e3c-90ed-51b16a110a6a" containerName="rabbitmq" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.421251 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1729cee4-39e5-4e3c-90ed-51b16a110a6a" containerName="rabbitmq" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.422259 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.427040 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bx2jc" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.427211 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.427386 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.427515 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.427675 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.427798 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.427902 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.468251 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.499822 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.499909 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.499938 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.499993 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.500017 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.500043 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9tpn\" (UniqueName: \"kubernetes.io/projected/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-kube-api-access-p9tpn\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.500074 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.500093 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.500114 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.500139 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.500160 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.602142 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.602200 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.602232 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9tpn\" (UniqueName: \"kubernetes.io/projected/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-kube-api-access-p9tpn\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.602264 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.602282 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.602306 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.602326 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.602346 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.602397 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.602428 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.602453 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.603469 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.604054 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.605347 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.605367 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.606232 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.606424 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.607234 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.607703 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.610168 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.619352 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.628126 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9tpn\" (UniqueName: 
\"kubernetes.io/projected/f5d14d1f-b7c5-4d86-9420-fbf8a044780c-kube-api-access-p9tpn\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.639489 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f5d14d1f-b7c5-4d86-9420-fbf8a044780c\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:53 crc kubenswrapper[4796]: I1125 14:49:53.776965 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:49:54 crc kubenswrapper[4796]: I1125 14:49:54.280817 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 14:49:54 crc kubenswrapper[4796]: W1125 14:49:54.374824 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5d14d1f_b7c5_4d86_9420_fbf8a044780c.slice/crio-1739ba5203a5381bdcd3501c14e701c916b2bc9481393eb75654a464c6685e88 WatchSource:0}: Error finding container 1739ba5203a5381bdcd3501c14e701c916b2bc9481393eb75654a464c6685e88: Status 404 returned error can't find the container with id 1739ba5203a5381bdcd3501c14e701c916b2bc9481393eb75654a464c6685e88 Nov 25 14:49:54 crc kubenswrapper[4796]: I1125 14:49:54.431982 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1729cee4-39e5-4e3c-90ed-51b16a110a6a" path="/var/lib/kubelet/pods/1729cee4-39e5-4e3c-90ed-51b16a110a6a/volumes" Nov 25 14:49:55 crc kubenswrapper[4796]: I1125 14:49:55.362747 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f5d14d1f-b7c5-4d86-9420-fbf8a044780c","Type":"ContainerStarted","Data":"1739ba5203a5381bdcd3501c14e701c916b2bc9481393eb75654a464c6685e88"} Nov 
25 14:49:55 crc kubenswrapper[4796]: I1125 14:49:55.365415 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0bde17cd-d557-45b1-8796-d7293d21c038","Type":"ContainerStarted","Data":"19b9bfc3e15f85c3c5d87312c18c5d256338068e0d3922757dd96e5b5689ad4d"} Nov 25 14:49:56 crc kubenswrapper[4796]: I1125 14:49:56.375138 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f5d14d1f-b7c5-4d86-9420-fbf8a044780c","Type":"ContainerStarted","Data":"880cc9fbc1f78d21b5a0bf92f875cdca4539c7db14abc1241e72eb7b74ef8b96"} Nov 25 14:49:58 crc kubenswrapper[4796]: I1125 14:49:58.396682 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sxk9l"] Nov 25 14:49:58 crc kubenswrapper[4796]: I1125 14:49:58.399777 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxk9l" Nov 25 14:49:58 crc kubenswrapper[4796]: I1125 14:49:58.406177 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxk9l"] Nov 25 14:49:58 crc kubenswrapper[4796]: I1125 14:49:58.499402 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe08d471-ea80-450b-a404-813f7ded819e-utilities\") pod \"redhat-marketplace-sxk9l\" (UID: \"fe08d471-ea80-450b-a404-813f7ded819e\") " pod="openshift-marketplace/redhat-marketplace-sxk9l" Nov 25 14:49:58 crc kubenswrapper[4796]: I1125 14:49:58.499694 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe08d471-ea80-450b-a404-813f7ded819e-catalog-content\") pod \"redhat-marketplace-sxk9l\" (UID: \"fe08d471-ea80-450b-a404-813f7ded819e\") " pod="openshift-marketplace/redhat-marketplace-sxk9l" Nov 25 14:49:58 crc kubenswrapper[4796]: 
I1125 14:49:58.499771 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nfrw\" (UniqueName: \"kubernetes.io/projected/fe08d471-ea80-450b-a404-813f7ded819e-kube-api-access-2nfrw\") pod \"redhat-marketplace-sxk9l\" (UID: \"fe08d471-ea80-450b-a404-813f7ded819e\") " pod="openshift-marketplace/redhat-marketplace-sxk9l" Nov 25 14:49:58 crc kubenswrapper[4796]: I1125 14:49:58.602563 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe08d471-ea80-450b-a404-813f7ded819e-utilities\") pod \"redhat-marketplace-sxk9l\" (UID: \"fe08d471-ea80-450b-a404-813f7ded819e\") " pod="openshift-marketplace/redhat-marketplace-sxk9l" Nov 25 14:49:58 crc kubenswrapper[4796]: I1125 14:49:58.602872 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe08d471-ea80-450b-a404-813f7ded819e-catalog-content\") pod \"redhat-marketplace-sxk9l\" (UID: \"fe08d471-ea80-450b-a404-813f7ded819e\") " pod="openshift-marketplace/redhat-marketplace-sxk9l" Nov 25 14:49:58 crc kubenswrapper[4796]: I1125 14:49:58.602920 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nfrw\" (UniqueName: \"kubernetes.io/projected/fe08d471-ea80-450b-a404-813f7ded819e-kube-api-access-2nfrw\") pod \"redhat-marketplace-sxk9l\" (UID: \"fe08d471-ea80-450b-a404-813f7ded819e\") " pod="openshift-marketplace/redhat-marketplace-sxk9l" Nov 25 14:49:58 crc kubenswrapper[4796]: I1125 14:49:58.603935 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe08d471-ea80-450b-a404-813f7ded819e-utilities\") pod \"redhat-marketplace-sxk9l\" (UID: \"fe08d471-ea80-450b-a404-813f7ded819e\") " pod="openshift-marketplace/redhat-marketplace-sxk9l" Nov 25 14:49:58 crc kubenswrapper[4796]: I1125 
14:49:58.604123 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe08d471-ea80-450b-a404-813f7ded819e-catalog-content\") pod \"redhat-marketplace-sxk9l\" (UID: \"fe08d471-ea80-450b-a404-813f7ded819e\") " pod="openshift-marketplace/redhat-marketplace-sxk9l" Nov 25 14:49:58 crc kubenswrapper[4796]: I1125 14:49:58.621443 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nfrw\" (UniqueName: \"kubernetes.io/projected/fe08d471-ea80-450b-a404-813f7ded819e-kube-api-access-2nfrw\") pod \"redhat-marketplace-sxk9l\" (UID: \"fe08d471-ea80-450b-a404-813f7ded819e\") " pod="openshift-marketplace/redhat-marketplace-sxk9l" Nov 25 14:49:58 crc kubenswrapper[4796]: I1125 14:49:58.731029 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxk9l" Nov 25 14:49:59 crc kubenswrapper[4796]: I1125 14:49:59.212369 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxk9l"] Nov 25 14:49:59 crc kubenswrapper[4796]: I1125 14:49:59.402492 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxk9l" event={"ID":"fe08d471-ea80-450b-a404-813f7ded819e","Type":"ContainerStarted","Data":"8ecd0d4debdade1133dd0b83b29d048a4656a9ed1e844d574879a4c89e910459"} Nov 25 14:49:59 crc kubenswrapper[4796]: I1125 14:49:59.995995 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mkj6c"] Nov 25 14:49:59 crc kubenswrapper[4796]: I1125 14:49:59.998192 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:00 crc kubenswrapper[4796]: I1125 14:50:00.001678 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 25 14:50:00 crc kubenswrapper[4796]: I1125 14:50:00.014085 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mkj6c"] Nov 25 14:50:00 crc kubenswrapper[4796]: I1125 14:50:00.045413 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-config\") pod \"dnsmasq-dns-79bd4cc8c9-mkj6c\" (UID: \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:00 crc kubenswrapper[4796]: I1125 14:50:00.045505 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-mkj6c\" (UID: \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:00 crc kubenswrapper[4796]: I1125 14:50:00.045550 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-mkj6c\" (UID: \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:00 crc kubenswrapper[4796]: I1125 14:50:00.045721 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-mkj6c\" (UID: \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:00 crc kubenswrapper[4796]: I1125 14:50:00.045822 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-mkj6c\" (UID: \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:00 crc kubenswrapper[4796]: I1125 14:50:00.045987 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmwjb\" (UniqueName: \"kubernetes.io/projected/1965d8b3-fa1b-4330-bd29-12ee5ff08645-kube-api-access-bmwjb\") pod \"dnsmasq-dns-79bd4cc8c9-mkj6c\" (UID: \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:00 crc kubenswrapper[4796]: I1125 14:50:00.046024 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-mkj6c\" (UID: \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:00 crc kubenswrapper[4796]: I1125 14:50:00.147522 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmwjb\" (UniqueName: \"kubernetes.io/projected/1965d8b3-fa1b-4330-bd29-12ee5ff08645-kube-api-access-bmwjb\") pod \"dnsmasq-dns-79bd4cc8c9-mkj6c\" (UID: \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:00 crc kubenswrapper[4796]: I1125 14:50:00.147613 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-mkj6c\" (UID: 
\"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:00 crc kubenswrapper[4796]: I1125 14:50:00.147734 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-config\") pod \"dnsmasq-dns-79bd4cc8c9-mkj6c\" (UID: \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:00 crc kubenswrapper[4796]: I1125 14:50:00.147785 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-mkj6c\" (UID: \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:00 crc kubenswrapper[4796]: I1125 14:50:00.147829 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-mkj6c\" (UID: \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:00 crc kubenswrapper[4796]: I1125 14:50:00.147854 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-mkj6c\" (UID: \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:00 crc kubenswrapper[4796]: I1125 14:50:00.147936 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-mkj6c\" (UID: \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:00 crc kubenswrapper[4796]: I1125 14:50:00.148817 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-mkj6c\" (UID: \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:00 crc kubenswrapper[4796]: I1125 14:50:00.148841 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-config\") pod \"dnsmasq-dns-79bd4cc8c9-mkj6c\" (UID: \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:00 crc kubenswrapper[4796]: I1125 14:50:00.148885 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-mkj6c\" (UID: \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:00 crc kubenswrapper[4796]: I1125 14:50:00.148885 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-mkj6c\" (UID: \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:00 crc kubenswrapper[4796]: I1125 14:50:00.149082 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-mkj6c\" (UID: \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:00 crc kubenswrapper[4796]: 
I1125 14:50:00.149530 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-mkj6c\" (UID: \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:00 crc kubenswrapper[4796]: I1125 14:50:00.166489 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmwjb\" (UniqueName: \"kubernetes.io/projected/1965d8b3-fa1b-4330-bd29-12ee5ff08645-kube-api-access-bmwjb\") pod \"dnsmasq-dns-79bd4cc8c9-mkj6c\" (UID: \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:00 crc kubenswrapper[4796]: I1125 14:50:00.374154 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:00 crc kubenswrapper[4796]: I1125 14:50:00.422426 4796 generic.go:334] "Generic (PLEG): container finished" podID="fe08d471-ea80-450b-a404-813f7ded819e" containerID="94a71a1b0d3d6e415c65793e5d1bec17eadc7955b63ee7b563584f5a4d65d396" exitCode=0 Nov 25 14:50:00 crc kubenswrapper[4796]: I1125 14:50:00.427319 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxk9l" event={"ID":"fe08d471-ea80-450b-a404-813f7ded819e","Type":"ContainerDied","Data":"94a71a1b0d3d6e415c65793e5d1bec17eadc7955b63ee7b563584f5a4d65d396"} Nov 25 14:50:01 crc kubenswrapper[4796]: I1125 14:50:01.023407 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mkj6c"] Nov 25 14:50:01 crc kubenswrapper[4796]: I1125 14:50:01.433590 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" event={"ID":"1965d8b3-fa1b-4330-bd29-12ee5ff08645","Type":"ContainerStarted","Data":"c67037208af6237dabc8962cf167886ced3bb3b5e7b06fc86d89905dba03ed40"} Nov 25 14:50:01 crc 
kubenswrapper[4796]: I1125 14:50:01.433927 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" event={"ID":"1965d8b3-fa1b-4330-bd29-12ee5ff08645","Type":"ContainerStarted","Data":"c542deb9f75e7b65ca4d59ab5c185d9d1e753754d580cc330137e97cdfa9b0ec"} Nov 25 14:50:02 crc kubenswrapper[4796]: I1125 14:50:02.445176 4796 generic.go:334] "Generic (PLEG): container finished" podID="1965d8b3-fa1b-4330-bd29-12ee5ff08645" containerID="c67037208af6237dabc8962cf167886ced3bb3b5e7b06fc86d89905dba03ed40" exitCode=0 Nov 25 14:50:02 crc kubenswrapper[4796]: I1125 14:50:02.445368 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" event={"ID":"1965d8b3-fa1b-4330-bd29-12ee5ff08645","Type":"ContainerDied","Data":"c67037208af6237dabc8962cf167886ced3bb3b5e7b06fc86d89905dba03ed40"} Nov 25 14:50:03 crc kubenswrapper[4796]: I1125 14:50:03.454833 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" event={"ID":"1965d8b3-fa1b-4330-bd29-12ee5ff08645","Type":"ContainerStarted","Data":"686cb36958470ccda182d25e2d44cdfdc3221a0df62afaa2aff8c892f4565345"} Nov 25 14:50:03 crc kubenswrapper[4796]: I1125 14:50:03.455249 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:03 crc kubenswrapper[4796]: I1125 14:50:03.457755 4796 generic.go:334] "Generic (PLEG): container finished" podID="fe08d471-ea80-450b-a404-813f7ded819e" containerID="f15cf1c30f4179a5137dda7c89dddd69166fdde7338cd1dc89f182ba3c5f36dc" exitCode=0 Nov 25 14:50:03 crc kubenswrapper[4796]: I1125 14:50:03.457806 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxk9l" event={"ID":"fe08d471-ea80-450b-a404-813f7ded819e","Type":"ContainerDied","Data":"f15cf1c30f4179a5137dda7c89dddd69166fdde7338cd1dc89f182ba3c5f36dc"} Nov 25 14:50:03 crc kubenswrapper[4796]: I1125 14:50:03.482465 4796 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" podStartSLOduration=4.482448074 podStartE2EDuration="4.482448074s" podCreationTimestamp="2025-11-25 14:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:50:03.47463981 +0000 UTC m=+1531.817749234" watchObservedRunningTime="2025-11-25 14:50:03.482448074 +0000 UTC m=+1531.825557488" Nov 25 14:50:05 crc kubenswrapper[4796]: I1125 14:50:05.481641 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxk9l" event={"ID":"fe08d471-ea80-450b-a404-813f7ded819e","Type":"ContainerStarted","Data":"363231601cdf33cbf27464f7007383d131448c1a1d91d2e1643a3a0e6aa09674"} Nov 25 14:50:05 crc kubenswrapper[4796]: I1125 14:50:05.508235 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sxk9l" podStartSLOduration=2.900867055 podStartE2EDuration="7.508215255s" podCreationTimestamp="2025-11-25 14:49:58 +0000 UTC" firstStartedPulling="2025-11-25 14:50:00.430771834 +0000 UTC m=+1528.773881258" lastFinishedPulling="2025-11-25 14:50:05.038120034 +0000 UTC m=+1533.381229458" observedRunningTime="2025-11-25 14:50:05.499499762 +0000 UTC m=+1533.842609196" watchObservedRunningTime="2025-11-25 14:50:05.508215255 +0000 UTC m=+1533.851324679" Nov 25 14:50:08 crc kubenswrapper[4796]: I1125 14:50:08.731692 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sxk9l" Nov 25 14:50:08 crc kubenswrapper[4796]: I1125 14:50:08.732317 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sxk9l" Nov 25 14:50:08 crc kubenswrapper[4796]: I1125 14:50:08.785312 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-sxk9l" Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.376399 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.449765 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-rwcsm"] Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.450033 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" podUID="a5fc9195-8dc5-407c-9d3b-67b134be75f3" containerName="dnsmasq-dns" containerID="cri-o://98cbdbb576bd93914e34146db0ecea22151d9b2dfa109010326f0d7e1fa393ec" gracePeriod=10 Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.591680 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55478c4467-tjxqx"] Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.598039 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-tjxqx" Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.614801 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-tjxqx"] Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.781374 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64408db4-ea13-40ee-b40d-ce6e489f2b82-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-tjxqx\" (UID: \"64408db4-ea13-40ee-b40d-ce6e489f2b82\") " pod="openstack/dnsmasq-dns-55478c4467-tjxqx" Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.781948 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mnkc\" (UniqueName: \"kubernetes.io/projected/64408db4-ea13-40ee-b40d-ce6e489f2b82-kube-api-access-5mnkc\") pod \"dnsmasq-dns-55478c4467-tjxqx\" (UID: \"64408db4-ea13-40ee-b40d-ce6e489f2b82\") " pod="openstack/dnsmasq-dns-55478c4467-tjxqx" Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.782068 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64408db4-ea13-40ee-b40d-ce6e489f2b82-dns-svc\") pod \"dnsmasq-dns-55478c4467-tjxqx\" (UID: \"64408db4-ea13-40ee-b40d-ce6e489f2b82\") " pod="openstack/dnsmasq-dns-55478c4467-tjxqx" Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.782386 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64408db4-ea13-40ee-b40d-ce6e489f2b82-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-tjxqx\" (UID: \"64408db4-ea13-40ee-b40d-ce6e489f2b82\") " pod="openstack/dnsmasq-dns-55478c4467-tjxqx" Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.782436 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/64408db4-ea13-40ee-b40d-ce6e489f2b82-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-tjxqx\" (UID: \"64408db4-ea13-40ee-b40d-ce6e489f2b82\") " pod="openstack/dnsmasq-dns-55478c4467-tjxqx" Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.782469 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64408db4-ea13-40ee-b40d-ce6e489f2b82-config\") pod \"dnsmasq-dns-55478c4467-tjxqx\" (UID: \"64408db4-ea13-40ee-b40d-ce6e489f2b82\") " pod="openstack/dnsmasq-dns-55478c4467-tjxqx" Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.782673 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64408db4-ea13-40ee-b40d-ce6e489f2b82-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-tjxqx\" (UID: \"64408db4-ea13-40ee-b40d-ce6e489f2b82\") " pod="openstack/dnsmasq-dns-55478c4467-tjxqx" Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.884194 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64408db4-ea13-40ee-b40d-ce6e489f2b82-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-tjxqx\" (UID: \"64408db4-ea13-40ee-b40d-ce6e489f2b82\") " pod="openstack/dnsmasq-dns-55478c4467-tjxqx" Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.884288 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64408db4-ea13-40ee-b40d-ce6e489f2b82-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-tjxqx\" (UID: \"64408db4-ea13-40ee-b40d-ce6e489f2b82\") " pod="openstack/dnsmasq-dns-55478c4467-tjxqx" Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.884312 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mnkc\" (UniqueName: \"kubernetes.io/projected/64408db4-ea13-40ee-b40d-ce6e489f2b82-kube-api-access-5mnkc\") pod \"dnsmasq-dns-55478c4467-tjxqx\" (UID: \"64408db4-ea13-40ee-b40d-ce6e489f2b82\") " pod="openstack/dnsmasq-dns-55478c4467-tjxqx" Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.884339 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64408db4-ea13-40ee-b40d-ce6e489f2b82-dns-svc\") pod \"dnsmasq-dns-55478c4467-tjxqx\" (UID: \"64408db4-ea13-40ee-b40d-ce6e489f2b82\") " pod="openstack/dnsmasq-dns-55478c4467-tjxqx" Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.884381 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64408db4-ea13-40ee-b40d-ce6e489f2b82-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-tjxqx\" (UID: \"64408db4-ea13-40ee-b40d-ce6e489f2b82\") " pod="openstack/dnsmasq-dns-55478c4467-tjxqx" Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.884401 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/64408db4-ea13-40ee-b40d-ce6e489f2b82-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-tjxqx\" (UID: \"64408db4-ea13-40ee-b40d-ce6e489f2b82\") " pod="openstack/dnsmasq-dns-55478c4467-tjxqx" Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.884416 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64408db4-ea13-40ee-b40d-ce6e489f2b82-config\") pod \"dnsmasq-dns-55478c4467-tjxqx\" (UID: \"64408db4-ea13-40ee-b40d-ce6e489f2b82\") " pod="openstack/dnsmasq-dns-55478c4467-tjxqx" Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.885946 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64408db4-ea13-40ee-b40d-ce6e489f2b82-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-tjxqx\" (UID: \"64408db4-ea13-40ee-b40d-ce6e489f2b82\") " pod="openstack/dnsmasq-dns-55478c4467-tjxqx" Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.886021 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64408db4-ea13-40ee-b40d-ce6e489f2b82-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-tjxqx\" (UID: \"64408db4-ea13-40ee-b40d-ce6e489f2b82\") " pod="openstack/dnsmasq-dns-55478c4467-tjxqx" Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.886212 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64408db4-ea13-40ee-b40d-ce6e489f2b82-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-tjxqx\" (UID: \"64408db4-ea13-40ee-b40d-ce6e489f2b82\") " pod="openstack/dnsmasq-dns-55478c4467-tjxqx" Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.886644 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64408db4-ea13-40ee-b40d-ce6e489f2b82-dns-svc\") pod \"dnsmasq-dns-55478c4467-tjxqx\" (UID: \"64408db4-ea13-40ee-b40d-ce6e489f2b82\") " pod="openstack/dnsmasq-dns-55478c4467-tjxqx" Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.886771 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/64408db4-ea13-40ee-b40d-ce6e489f2b82-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-tjxqx\" (UID: \"64408db4-ea13-40ee-b40d-ce6e489f2b82\") " pod="openstack/dnsmasq-dns-55478c4467-tjxqx" Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.886815 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/64408db4-ea13-40ee-b40d-ce6e489f2b82-config\") pod \"dnsmasq-dns-55478c4467-tjxqx\" (UID: \"64408db4-ea13-40ee-b40d-ce6e489f2b82\") " pod="openstack/dnsmasq-dns-55478c4467-tjxqx" Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.913648 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mnkc\" (UniqueName: \"kubernetes.io/projected/64408db4-ea13-40ee-b40d-ce6e489f2b82-kube-api-access-5mnkc\") pod \"dnsmasq-dns-55478c4467-tjxqx\" (UID: \"64408db4-ea13-40ee-b40d-ce6e489f2b82\") " pod="openstack/dnsmasq-dns-55478c4467-tjxqx" Nov 25 14:50:10 crc kubenswrapper[4796]: I1125 14:50:10.933622 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-tjxqx" Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.090097 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.191994 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-config\") pod \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\" (UID: \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\") " Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.192069 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-dns-swift-storage-0\") pod \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\" (UID: \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\") " Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.192101 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-ovsdbserver-nb\") pod \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\" (UID: 
\"a5fc9195-8dc5-407c-9d3b-67b134be75f3\") " Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.192189 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98k2x\" (UniqueName: \"kubernetes.io/projected/a5fc9195-8dc5-407c-9d3b-67b134be75f3-kube-api-access-98k2x\") pod \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\" (UID: \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\") " Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.192278 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-ovsdbserver-sb\") pod \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\" (UID: \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\") " Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.192351 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-dns-svc\") pod \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\" (UID: \"a5fc9195-8dc5-407c-9d3b-67b134be75f3\") " Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.198003 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5fc9195-8dc5-407c-9d3b-67b134be75f3-kube-api-access-98k2x" (OuterVolumeSpecName: "kube-api-access-98k2x") pod "a5fc9195-8dc5-407c-9d3b-67b134be75f3" (UID: "a5fc9195-8dc5-407c-9d3b-67b134be75f3"). InnerVolumeSpecName "kube-api-access-98k2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.246973 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-config" (OuterVolumeSpecName: "config") pod "a5fc9195-8dc5-407c-9d3b-67b134be75f3" (UID: "a5fc9195-8dc5-407c-9d3b-67b134be75f3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.252344 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a5fc9195-8dc5-407c-9d3b-67b134be75f3" (UID: "a5fc9195-8dc5-407c-9d3b-67b134be75f3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.263139 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a5fc9195-8dc5-407c-9d3b-67b134be75f3" (UID: "a5fc9195-8dc5-407c-9d3b-67b134be75f3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.264275 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a5fc9195-8dc5-407c-9d3b-67b134be75f3" (UID: "a5fc9195-8dc5-407c-9d3b-67b134be75f3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.267361 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a5fc9195-8dc5-407c-9d3b-67b134be75f3" (UID: "a5fc9195-8dc5-407c-9d3b-67b134be75f3"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.295062 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.295104 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.295117 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.295129 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.295141 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a5fc9195-8dc5-407c-9d3b-67b134be75f3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.295154 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98k2x\" (UniqueName: \"kubernetes.io/projected/a5fc9195-8dc5-407c-9d3b-67b134be75f3-kube-api-access-98k2x\") on node \"crc\" DevicePath \"\"" Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.447449 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-tjxqx"] Nov 25 14:50:11 crc kubenswrapper[4796]: W1125 14:50:11.457434 4796 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64408db4_ea13_40ee_b40d_ce6e489f2b82.slice/crio-1dbcee845c10ea84dd4b51355080985f61207ad11a7542197a439b91667cc5e7 WatchSource:0}: Error finding container 1dbcee845c10ea84dd4b51355080985f61207ad11a7542197a439b91667cc5e7: Status 404 returned error can't find the container with id 1dbcee845c10ea84dd4b51355080985f61207ad11a7542197a439b91667cc5e7 Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.549002 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-tjxqx" event={"ID":"64408db4-ea13-40ee-b40d-ce6e489f2b82","Type":"ContainerStarted","Data":"1dbcee845c10ea84dd4b51355080985f61207ad11a7542197a439b91667cc5e7"} Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.552104 4796 generic.go:334] "Generic (PLEG): container finished" podID="a5fc9195-8dc5-407c-9d3b-67b134be75f3" containerID="98cbdbb576bd93914e34146db0ecea22151d9b2dfa109010326f0d7e1fa393ec" exitCode=0 Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.552140 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" event={"ID":"a5fc9195-8dc5-407c-9d3b-67b134be75f3","Type":"ContainerDied","Data":"98cbdbb576bd93914e34146db0ecea22151d9b2dfa109010326f0d7e1fa393ec"} Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.552159 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" event={"ID":"a5fc9195-8dc5-407c-9d3b-67b134be75f3","Type":"ContainerDied","Data":"36598fe32fbb33179bbacf6a1872027c150bc02fd948e95562ddfb9a9a4bdd5c"} Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.552186 4796 scope.go:117] "RemoveContainer" containerID="98cbdbb576bd93914e34146db0ecea22151d9b2dfa109010326f0d7e1fa393ec" Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.552306 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-rwcsm" Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.660287 4796 scope.go:117] "RemoveContainer" containerID="9db02dd6cb2e9fc5e49bd5421b28db8eaa8e79aedcea97f22e02193b6a5c003d" Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.690540 4796 scope.go:117] "RemoveContainer" containerID="98cbdbb576bd93914e34146db0ecea22151d9b2dfa109010326f0d7e1fa393ec" Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.690707 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-rwcsm"] Nov 25 14:50:11 crc kubenswrapper[4796]: E1125 14:50:11.691316 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98cbdbb576bd93914e34146db0ecea22151d9b2dfa109010326f0d7e1fa393ec\": container with ID starting with 98cbdbb576bd93914e34146db0ecea22151d9b2dfa109010326f0d7e1fa393ec not found: ID does not exist" containerID="98cbdbb576bd93914e34146db0ecea22151d9b2dfa109010326f0d7e1fa393ec" Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.691346 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98cbdbb576bd93914e34146db0ecea22151d9b2dfa109010326f0d7e1fa393ec"} err="failed to get container status \"98cbdbb576bd93914e34146db0ecea22151d9b2dfa109010326f0d7e1fa393ec\": rpc error: code = NotFound desc = could not find container \"98cbdbb576bd93914e34146db0ecea22151d9b2dfa109010326f0d7e1fa393ec\": container with ID starting with 98cbdbb576bd93914e34146db0ecea22151d9b2dfa109010326f0d7e1fa393ec not found: ID does not exist" Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.691388 4796 scope.go:117] "RemoveContainer" containerID="9db02dd6cb2e9fc5e49bd5421b28db8eaa8e79aedcea97f22e02193b6a5c003d" Nov 25 14:50:11 crc kubenswrapper[4796]: E1125 14:50:11.691855 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"9db02dd6cb2e9fc5e49bd5421b28db8eaa8e79aedcea97f22e02193b6a5c003d\": container with ID starting with 9db02dd6cb2e9fc5e49bd5421b28db8eaa8e79aedcea97f22e02193b6a5c003d not found: ID does not exist" containerID="9db02dd6cb2e9fc5e49bd5421b28db8eaa8e79aedcea97f22e02193b6a5c003d" Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.691910 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9db02dd6cb2e9fc5e49bd5421b28db8eaa8e79aedcea97f22e02193b6a5c003d"} err="failed to get container status \"9db02dd6cb2e9fc5e49bd5421b28db8eaa8e79aedcea97f22e02193b6a5c003d\": rpc error: code = NotFound desc = could not find container \"9db02dd6cb2e9fc5e49bd5421b28db8eaa8e79aedcea97f22e02193b6a5c003d\": container with ID starting with 9db02dd6cb2e9fc5e49bd5421b28db8eaa8e79aedcea97f22e02193b6a5c003d not found: ID does not exist" Nov 25 14:50:11 crc kubenswrapper[4796]: I1125 14:50:11.696610 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-rwcsm"] Nov 25 14:50:12 crc kubenswrapper[4796]: I1125 14:50:12.447775 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5fc9195-8dc5-407c-9d3b-67b134be75f3" path="/var/lib/kubelet/pods/a5fc9195-8dc5-407c-9d3b-67b134be75f3/volumes" Nov 25 14:50:12 crc kubenswrapper[4796]: I1125 14:50:12.563518 4796 generic.go:334] "Generic (PLEG): container finished" podID="64408db4-ea13-40ee-b40d-ce6e489f2b82" containerID="4b415ec6fb9d81a265bb63d4338c3b01a1bc8d4d7fb0968e9f6e0a6f15c58d48" exitCode=0 Nov 25 14:50:12 crc kubenswrapper[4796]: I1125 14:50:12.563598 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-tjxqx" event={"ID":"64408db4-ea13-40ee-b40d-ce6e489f2b82","Type":"ContainerDied","Data":"4b415ec6fb9d81a265bb63d4338c3b01a1bc8d4d7fb0968e9f6e0a6f15c58d48"} Nov 25 14:50:13 crc kubenswrapper[4796]: I1125 14:50:13.578853 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-55478c4467-tjxqx" event={"ID":"64408db4-ea13-40ee-b40d-ce6e489f2b82","Type":"ContainerStarted","Data":"1c8006d729d94d789652de04757e7ef8488ebe07cb702122cc42ec552129bee4"} Nov 25 14:50:13 crc kubenswrapper[4796]: I1125 14:50:13.579933 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-tjxqx" Nov 25 14:50:13 crc kubenswrapper[4796]: I1125 14:50:13.611102 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-tjxqx" podStartSLOduration=3.611079955 podStartE2EDuration="3.611079955s" podCreationTimestamp="2025-11-25 14:50:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:50:13.601873256 +0000 UTC m=+1541.944982680" watchObservedRunningTime="2025-11-25 14:50:13.611079955 +0000 UTC m=+1541.954189379" Nov 25 14:50:18 crc kubenswrapper[4796]: I1125 14:50:18.783992 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sxk9l" Nov 25 14:50:18 crc kubenswrapper[4796]: I1125 14:50:18.838114 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxk9l"] Nov 25 14:50:19 crc kubenswrapper[4796]: I1125 14:50:19.513693 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 14:50:19 crc kubenswrapper[4796]: I1125 14:50:19.513788 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 14:50:19 crc kubenswrapper[4796]: I1125 14:50:19.513860 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 14:50:19 crc kubenswrapper[4796]: I1125 14:50:19.515010 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7"} pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 14:50:19 crc kubenswrapper[4796]: I1125 14:50:19.515163 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" containerID="cri-o://62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" gracePeriod=600 Nov 25 14:50:19 crc kubenswrapper[4796]: I1125 14:50:19.646418 4796 generic.go:334] "Generic (PLEG): container finished" podID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerID="62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" exitCode=0 Nov 25 14:50:19 crc kubenswrapper[4796]: I1125 14:50:19.646500 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerDied","Data":"62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7"} Nov 25 14:50:19 crc kubenswrapper[4796]: I1125 14:50:19.646556 4796 scope.go:117] "RemoveContainer" containerID="b89880c276465411a1df30a0fcd1ff1a63ffefdebe8f12fb134cee10c6604130" Nov 25 14:50:19 crc kubenswrapper[4796]: I1125 14:50:19.646620 4796 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-sxk9l" podUID="fe08d471-ea80-450b-a404-813f7ded819e" containerName="registry-server" containerID="cri-o://363231601cdf33cbf27464f7007383d131448c1a1d91d2e1643a3a0e6aa09674" gracePeriod=2 Nov 25 14:50:19 crc kubenswrapper[4796]: E1125 14:50:19.878684 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 14:50:20 crc kubenswrapper[4796]: I1125 14:50:20.658466 4796 generic.go:334] "Generic (PLEG): container finished" podID="fe08d471-ea80-450b-a404-813f7ded819e" containerID="363231601cdf33cbf27464f7007383d131448c1a1d91d2e1643a3a0e6aa09674" exitCode=0 Nov 25 14:50:20 crc kubenswrapper[4796]: I1125 14:50:20.658543 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxk9l" event={"ID":"fe08d471-ea80-450b-a404-813f7ded819e","Type":"ContainerDied","Data":"363231601cdf33cbf27464f7007383d131448c1a1d91d2e1643a3a0e6aa09674"} Nov 25 14:50:20 crc kubenswrapper[4796]: I1125 14:50:20.659155 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sxk9l" event={"ID":"fe08d471-ea80-450b-a404-813f7ded819e","Type":"ContainerDied","Data":"8ecd0d4debdade1133dd0b83b29d048a4656a9ed1e844d574879a4c89e910459"} Nov 25 14:50:20 crc kubenswrapper[4796]: I1125 14:50:20.659185 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ecd0d4debdade1133dd0b83b29d048a4656a9ed1e844d574879a4c89e910459" Nov 25 14:50:20 crc kubenswrapper[4796]: I1125 14:50:20.661373 4796 scope.go:117] "RemoveContainer" 
containerID="62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" Nov 25 14:50:20 crc kubenswrapper[4796]: E1125 14:50:20.661846 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 14:50:20 crc kubenswrapper[4796]: I1125 14:50:20.695210 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxk9l" Nov 25 14:50:20 crc kubenswrapper[4796]: I1125 14:50:20.789088 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nfrw\" (UniqueName: \"kubernetes.io/projected/fe08d471-ea80-450b-a404-813f7ded819e-kube-api-access-2nfrw\") pod \"fe08d471-ea80-450b-a404-813f7ded819e\" (UID: \"fe08d471-ea80-450b-a404-813f7ded819e\") " Nov 25 14:50:20 crc kubenswrapper[4796]: I1125 14:50:20.789170 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe08d471-ea80-450b-a404-813f7ded819e-utilities\") pod \"fe08d471-ea80-450b-a404-813f7ded819e\" (UID: \"fe08d471-ea80-450b-a404-813f7ded819e\") " Nov 25 14:50:20 crc kubenswrapper[4796]: I1125 14:50:20.789294 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe08d471-ea80-450b-a404-813f7ded819e-catalog-content\") pod \"fe08d471-ea80-450b-a404-813f7ded819e\" (UID: \"fe08d471-ea80-450b-a404-813f7ded819e\") " Nov 25 14:50:20 crc kubenswrapper[4796]: I1125 14:50:20.790903 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/fe08d471-ea80-450b-a404-813f7ded819e-utilities" (OuterVolumeSpecName: "utilities") pod "fe08d471-ea80-450b-a404-813f7ded819e" (UID: "fe08d471-ea80-450b-a404-813f7ded819e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:50:20 crc kubenswrapper[4796]: I1125 14:50:20.796449 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe08d471-ea80-450b-a404-813f7ded819e-kube-api-access-2nfrw" (OuterVolumeSpecName: "kube-api-access-2nfrw") pod "fe08d471-ea80-450b-a404-813f7ded819e" (UID: "fe08d471-ea80-450b-a404-813f7ded819e"). InnerVolumeSpecName "kube-api-access-2nfrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:50:20 crc kubenswrapper[4796]: I1125 14:50:20.811973 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe08d471-ea80-450b-a404-813f7ded819e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe08d471-ea80-450b-a404-813f7ded819e" (UID: "fe08d471-ea80-450b-a404-813f7ded819e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:50:20 crc kubenswrapper[4796]: I1125 14:50:20.892103 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe08d471-ea80-450b-a404-813f7ded819e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:50:20 crc kubenswrapper[4796]: I1125 14:50:20.892161 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nfrw\" (UniqueName: \"kubernetes.io/projected/fe08d471-ea80-450b-a404-813f7ded819e-kube-api-access-2nfrw\") on node \"crc\" DevicePath \"\"" Nov 25 14:50:20 crc kubenswrapper[4796]: I1125 14:50:20.892179 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe08d471-ea80-450b-a404-813f7ded819e-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:50:20 crc kubenswrapper[4796]: I1125 14:50:20.935740 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-tjxqx" Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:20.997646 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mkj6c"] Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:20.998017 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" podUID="1965d8b3-fa1b-4330-bd29-12ee5ff08645" containerName="dnsmasq-dns" containerID="cri-o://686cb36958470ccda182d25e2d44cdfdc3221a0df62afaa2aff8c892f4565345" gracePeriod=10 Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.561312 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.673159 4796 generic.go:334] "Generic (PLEG): container finished" podID="1965d8b3-fa1b-4330-bd29-12ee5ff08645" containerID="686cb36958470ccda182d25e2d44cdfdc3221a0df62afaa2aff8c892f4565345" exitCode=0 Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.673249 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sxk9l" Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.673253 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.673248 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" event={"ID":"1965d8b3-fa1b-4330-bd29-12ee5ff08645","Type":"ContainerDied","Data":"686cb36958470ccda182d25e2d44cdfdc3221a0df62afaa2aff8c892f4565345"} Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.673323 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mkj6c" event={"ID":"1965d8b3-fa1b-4330-bd29-12ee5ff08645","Type":"ContainerDied","Data":"c542deb9f75e7b65ca4d59ab5c185d9d1e753754d580cc330137e97cdfa9b0ec"} Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.673353 4796 scope.go:117] "RemoveContainer" containerID="686cb36958470ccda182d25e2d44cdfdc3221a0df62afaa2aff8c892f4565345" Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.707805 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-dns-swift-storage-0\") pod \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\" (UID: \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.707884 4796 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-ovsdbserver-nb\") pod \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\" (UID: \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.707930 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-openstack-edpm-ipam\") pod \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\" (UID: \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.708007 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-dns-svc\") pod \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\" (UID: \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.708036 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-ovsdbserver-sb\") pod \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\" (UID: \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.708128 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmwjb\" (UniqueName: \"kubernetes.io/projected/1965d8b3-fa1b-4330-bd29-12ee5ff08645-kube-api-access-bmwjb\") pod \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\" (UID: \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.708161 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-config\") pod \"1965d8b3-fa1b-4330-bd29-12ee5ff08645\" (UID: 
\"1965d8b3-fa1b-4330-bd29-12ee5ff08645\") " Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.709126 4796 scope.go:117] "RemoveContainer" containerID="c67037208af6237dabc8962cf167886ced3bb3b5e7b06fc86d89905dba03ed40" Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.712388 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxk9l"] Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.721890 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sxk9l"] Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.722364 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1965d8b3-fa1b-4330-bd29-12ee5ff08645-kube-api-access-bmwjb" (OuterVolumeSpecName: "kube-api-access-bmwjb") pod "1965d8b3-fa1b-4330-bd29-12ee5ff08645" (UID: "1965d8b3-fa1b-4330-bd29-12ee5ff08645"). InnerVolumeSpecName "kube-api-access-bmwjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.766252 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1965d8b3-fa1b-4330-bd29-12ee5ff08645" (UID: "1965d8b3-fa1b-4330-bd29-12ee5ff08645"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.767908 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1965d8b3-fa1b-4330-bd29-12ee5ff08645" (UID: "1965d8b3-fa1b-4330-bd29-12ee5ff08645"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.770340 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1965d8b3-fa1b-4330-bd29-12ee5ff08645" (UID: "1965d8b3-fa1b-4330-bd29-12ee5ff08645"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.774764 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "1965d8b3-fa1b-4330-bd29-12ee5ff08645" (UID: "1965d8b3-fa1b-4330-bd29-12ee5ff08645"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.777208 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-config" (OuterVolumeSpecName: "config") pod "1965d8b3-fa1b-4330-bd29-12ee5ff08645" (UID: "1965d8b3-fa1b-4330-bd29-12ee5ff08645"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.791282 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1965d8b3-fa1b-4330-bd29-12ee5ff08645" (UID: "1965d8b3-fa1b-4330-bd29-12ee5ff08645"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.811216 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.811246 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.811257 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmwjb\" (UniqueName: \"kubernetes.io/projected/1965d8b3-fa1b-4330-bd29-12ee5ff08645-kube-api-access-bmwjb\") on node \"crc\" DevicePath \"\"" Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.811266 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.811274 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.811281 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.811289 4796 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1965d8b3-fa1b-4330-bd29-12ee5ff08645-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.890031 
4796 scope.go:117] "RemoveContainer" containerID="686cb36958470ccda182d25e2d44cdfdc3221a0df62afaa2aff8c892f4565345" Nov 25 14:50:21 crc kubenswrapper[4796]: E1125 14:50:21.890689 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"686cb36958470ccda182d25e2d44cdfdc3221a0df62afaa2aff8c892f4565345\": container with ID starting with 686cb36958470ccda182d25e2d44cdfdc3221a0df62afaa2aff8c892f4565345 not found: ID does not exist" containerID="686cb36958470ccda182d25e2d44cdfdc3221a0df62afaa2aff8c892f4565345" Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.890753 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"686cb36958470ccda182d25e2d44cdfdc3221a0df62afaa2aff8c892f4565345"} err="failed to get container status \"686cb36958470ccda182d25e2d44cdfdc3221a0df62afaa2aff8c892f4565345\": rpc error: code = NotFound desc = could not find container \"686cb36958470ccda182d25e2d44cdfdc3221a0df62afaa2aff8c892f4565345\": container with ID starting with 686cb36958470ccda182d25e2d44cdfdc3221a0df62afaa2aff8c892f4565345 not found: ID does not exist" Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.890800 4796 scope.go:117] "RemoveContainer" containerID="c67037208af6237dabc8962cf167886ced3bb3b5e7b06fc86d89905dba03ed40" Nov 25 14:50:21 crc kubenswrapper[4796]: E1125 14:50:21.891229 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c67037208af6237dabc8962cf167886ced3bb3b5e7b06fc86d89905dba03ed40\": container with ID starting with c67037208af6237dabc8962cf167886ced3bb3b5e7b06fc86d89905dba03ed40 not found: ID does not exist" containerID="c67037208af6237dabc8962cf167886ced3bb3b5e7b06fc86d89905dba03ed40" Nov 25 14:50:21 crc kubenswrapper[4796]: I1125 14:50:21.892081 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c67037208af6237dabc8962cf167886ced3bb3b5e7b06fc86d89905dba03ed40"} err="failed to get container status \"c67037208af6237dabc8962cf167886ced3bb3b5e7b06fc86d89905dba03ed40\": rpc error: code = NotFound desc = could not find container \"c67037208af6237dabc8962cf167886ced3bb3b5e7b06fc86d89905dba03ed40\": container with ID starting with c67037208af6237dabc8962cf167886ced3bb3b5e7b06fc86d89905dba03ed40 not found: ID does not exist" Nov 25 14:50:22 crc kubenswrapper[4796]: I1125 14:50:22.008845 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mkj6c"] Nov 25 14:50:22 crc kubenswrapper[4796]: I1125 14:50:22.018250 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mkj6c"] Nov 25 14:50:22 crc kubenswrapper[4796]: I1125 14:50:22.426991 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1965d8b3-fa1b-4330-bd29-12ee5ff08645" path="/var/lib/kubelet/pods/1965d8b3-fa1b-4330-bd29-12ee5ff08645/volumes" Nov 25 14:50:22 crc kubenswrapper[4796]: I1125 14:50:22.428017 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe08d471-ea80-450b-a404-813f7ded819e" path="/var/lib/kubelet/pods/fe08d471-ea80-450b-a404-813f7ded819e/volumes" Nov 25 14:50:26 crc kubenswrapper[4796]: I1125 14:50:26.727310 4796 generic.go:334] "Generic (PLEG): container finished" podID="0bde17cd-d557-45b1-8796-d7293d21c038" containerID="19b9bfc3e15f85c3c5d87312c18c5d256338068e0d3922757dd96e5b5689ad4d" exitCode=0 Nov 25 14:50:26 crc kubenswrapper[4796]: I1125 14:50:26.727513 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0bde17cd-d557-45b1-8796-d7293d21c038","Type":"ContainerDied","Data":"19b9bfc3e15f85c3c5d87312c18c5d256338068e0d3922757dd96e5b5689ad4d"} Nov 25 14:50:27 crc kubenswrapper[4796]: I1125 14:50:27.745367 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"0bde17cd-d557-45b1-8796-d7293d21c038","Type":"ContainerStarted","Data":"26045295e14919e5ecc06311c234560f6ca29a5cae6218c54142790f115f63b8"} Nov 25 14:50:27 crc kubenswrapper[4796]: I1125 14:50:27.747807 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 25 14:50:27 crc kubenswrapper[4796]: I1125 14:50:27.785745 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=35.785726656 podStartE2EDuration="35.785726656s" podCreationTimestamp="2025-11-25 14:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:50:27.779113989 +0000 UTC m=+1556.122223423" watchObservedRunningTime="2025-11-25 14:50:27.785726656 +0000 UTC m=+1556.128836080" Nov 25 14:50:27 crc kubenswrapper[4796]: E1125 14:50:27.993271 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5d14d1f_b7c5_4d86_9420_fbf8a044780c.slice/crio-880cc9fbc1f78d21b5a0bf92f875cdca4539c7db14abc1241e72eb7b74ef8b96.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5d14d1f_b7c5_4d86_9420_fbf8a044780c.slice/crio-conmon-880cc9fbc1f78d21b5a0bf92f875cdca4539c7db14abc1241e72eb7b74ef8b96.scope\": RecentStats: unable to find data in memory cache]" Nov 25 14:50:28 crc kubenswrapper[4796]: I1125 14:50:28.756523 4796 generic.go:334] "Generic (PLEG): container finished" podID="f5d14d1f-b7c5-4d86-9420-fbf8a044780c" containerID="880cc9fbc1f78d21b5a0bf92f875cdca4539c7db14abc1241e72eb7b74ef8b96" exitCode=0 Nov 25 14:50:28 crc kubenswrapper[4796]: I1125 14:50:28.756619 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"f5d14d1f-b7c5-4d86-9420-fbf8a044780c","Type":"ContainerDied","Data":"880cc9fbc1f78d21b5a0bf92f875cdca4539c7db14abc1241e72eb7b74ef8b96"} Nov 25 14:50:29 crc kubenswrapper[4796]: I1125 14:50:29.769298 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f5d14d1f-b7c5-4d86-9420-fbf8a044780c","Type":"ContainerStarted","Data":"9831cbcb40e4a317a8c1bf0c849a412cdb11d5c1b1d6c1c33ae73b740c8d29b6"} Nov 25 14:50:29 crc kubenswrapper[4796]: I1125 14:50:29.769939 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:50:29 crc kubenswrapper[4796]: I1125 14:50:29.805237 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.80521437 podStartE2EDuration="36.80521437s" podCreationTimestamp="2025-11-25 14:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:50:29.793150403 +0000 UTC m=+1558.136259837" watchObservedRunningTime="2025-11-25 14:50:29.80521437 +0000 UTC m=+1558.148323804" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.409423 4796 scope.go:117] "RemoveContainer" containerID="62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" Nov 25 14:50:33 crc kubenswrapper[4796]: E1125 14:50:33.410177 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.538708 4796 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q"] Nov 25 14:50:33 crc kubenswrapper[4796]: E1125 14:50:33.539421 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5fc9195-8dc5-407c-9d3b-67b134be75f3" containerName="init" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.539549 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5fc9195-8dc5-407c-9d3b-67b134be75f3" containerName="init" Nov 25 14:50:33 crc kubenswrapper[4796]: E1125 14:50:33.539676 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1965d8b3-fa1b-4330-bd29-12ee5ff08645" containerName="init" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.539777 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1965d8b3-fa1b-4330-bd29-12ee5ff08645" containerName="init" Nov 25 14:50:33 crc kubenswrapper[4796]: E1125 14:50:33.539876 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5fc9195-8dc5-407c-9d3b-67b134be75f3" containerName="dnsmasq-dns" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.539989 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5fc9195-8dc5-407c-9d3b-67b134be75f3" containerName="dnsmasq-dns" Nov 25 14:50:33 crc kubenswrapper[4796]: E1125 14:50:33.540083 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe08d471-ea80-450b-a404-813f7ded819e" containerName="extract-content" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.540158 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe08d471-ea80-450b-a404-813f7ded819e" containerName="extract-content" Nov 25 14:50:33 crc kubenswrapper[4796]: E1125 14:50:33.540241 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe08d471-ea80-450b-a404-813f7ded819e" containerName="registry-server" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.540329 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe08d471-ea80-450b-a404-813f7ded819e" 
containerName="registry-server" Nov 25 14:50:33 crc kubenswrapper[4796]: E1125 14:50:33.540419 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe08d471-ea80-450b-a404-813f7ded819e" containerName="extract-utilities" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.540508 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe08d471-ea80-450b-a404-813f7ded819e" containerName="extract-utilities" Nov 25 14:50:33 crc kubenswrapper[4796]: E1125 14:50:33.540664 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1965d8b3-fa1b-4330-bd29-12ee5ff08645" containerName="dnsmasq-dns" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.540742 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1965d8b3-fa1b-4330-bd29-12ee5ff08645" containerName="dnsmasq-dns" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.541075 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1965d8b3-fa1b-4330-bd29-12ee5ff08645" containerName="dnsmasq-dns" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.541200 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5fc9195-8dc5-407c-9d3b-67b134be75f3" containerName="dnsmasq-dns" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.541313 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe08d471-ea80-450b-a404-813f7ded819e" containerName="registry-server" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.542178 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.547074 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n2hfx" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.547110 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.547189 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.547084 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.551687 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q"] Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.641627 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6699babf-2b9f-432c-b0fd-60452bb9ad6b-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q\" (UID: \"6699babf-2b9f-432c-b0fd-60452bb9ad6b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.641681 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9z8k\" (UniqueName: \"kubernetes.io/projected/6699babf-2b9f-432c-b0fd-60452bb9ad6b-kube-api-access-d9z8k\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q\" (UID: \"6699babf-2b9f-432c-b0fd-60452bb9ad6b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.641752 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6699babf-2b9f-432c-b0fd-60452bb9ad6b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q\" (UID: \"6699babf-2b9f-432c-b0fd-60452bb9ad6b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.641771 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6699babf-2b9f-432c-b0fd-60452bb9ad6b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q\" (UID: \"6699babf-2b9f-432c-b0fd-60452bb9ad6b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.743889 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6699babf-2b9f-432c-b0fd-60452bb9ad6b-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q\" (UID: \"6699babf-2b9f-432c-b0fd-60452bb9ad6b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.743960 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9z8k\" (UniqueName: \"kubernetes.io/projected/6699babf-2b9f-432c-b0fd-60452bb9ad6b-kube-api-access-d9z8k\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q\" (UID: \"6699babf-2b9f-432c-b0fd-60452bb9ad6b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.744037 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6699babf-2b9f-432c-b0fd-60452bb9ad6b-inventory\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q\" (UID: \"6699babf-2b9f-432c-b0fd-60452bb9ad6b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.744066 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6699babf-2b9f-432c-b0fd-60452bb9ad6b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q\" (UID: \"6699babf-2b9f-432c-b0fd-60452bb9ad6b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.749703 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6699babf-2b9f-432c-b0fd-60452bb9ad6b-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q\" (UID: \"6699babf-2b9f-432c-b0fd-60452bb9ad6b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.749736 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6699babf-2b9f-432c-b0fd-60452bb9ad6b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q\" (UID: \"6699babf-2b9f-432c-b0fd-60452bb9ad6b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.761488 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6699babf-2b9f-432c-b0fd-60452bb9ad6b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q\" (UID: \"6699babf-2b9f-432c-b0fd-60452bb9ad6b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.765468 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9z8k\" (UniqueName: \"kubernetes.io/projected/6699babf-2b9f-432c-b0fd-60452bb9ad6b-kube-api-access-d9z8k\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q\" (UID: \"6699babf-2b9f-432c-b0fd-60452bb9ad6b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q" Nov 25 14:50:33 crc kubenswrapper[4796]: I1125 14:50:33.861486 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q" Nov 25 14:50:34 crc kubenswrapper[4796]: I1125 14:50:34.397112 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q"] Nov 25 14:50:34 crc kubenswrapper[4796]: I1125 14:50:34.814480 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q" event={"ID":"6699babf-2b9f-432c-b0fd-60452bb9ad6b","Type":"ContainerStarted","Data":"02ffeee18b422df4fe908def57d4ef3a95c6b8570c9d7ebfda15fbba4081842c"} Nov 25 14:50:42 crc kubenswrapper[4796]: I1125 14:50:42.828745 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 25 14:50:43 crc kubenswrapper[4796]: I1125 14:50:43.782818 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 25 14:50:46 crc kubenswrapper[4796]: I1125 14:50:46.409122 4796 scope.go:117] "RemoveContainer" containerID="62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" Nov 25 14:50:46 crc kubenswrapper[4796]: E1125 14:50:46.409690 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 14:50:48 crc kubenswrapper[4796]: I1125 14:50:48.035549 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q" event={"ID":"6699babf-2b9f-432c-b0fd-60452bb9ad6b","Type":"ContainerStarted","Data":"09d6dc2713f036ab3635d8dff6831b301f34fac8b821940df47bf133f265c0ce"} Nov 25 14:50:48 crc kubenswrapper[4796]: I1125 14:50:48.068235 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q" podStartSLOduration=2.153949364 podStartE2EDuration="15.068205429s" podCreationTimestamp="2025-11-25 14:50:33 +0000 UTC" firstStartedPulling="2025-11-25 14:50:34.402899528 +0000 UTC m=+1562.746008952" lastFinishedPulling="2025-11-25 14:50:47.317155593 +0000 UTC m=+1575.660265017" observedRunningTime="2025-11-25 14:50:48.06313827 +0000 UTC m=+1576.406247704" watchObservedRunningTime="2025-11-25 14:50:48.068205429 +0000 UTC m=+1576.411314883" Nov 25 14:51:00 crc kubenswrapper[4796]: I1125 14:51:00.410604 4796 scope.go:117] "RemoveContainer" containerID="62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" Nov 25 14:51:00 crc kubenswrapper[4796]: E1125 14:51:00.411452 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 14:51:01 crc kubenswrapper[4796]: I1125 14:51:01.161662 4796 generic.go:334] "Generic (PLEG): container 
finished" podID="6699babf-2b9f-432c-b0fd-60452bb9ad6b" containerID="09d6dc2713f036ab3635d8dff6831b301f34fac8b821940df47bf133f265c0ce" exitCode=0 Nov 25 14:51:01 crc kubenswrapper[4796]: I1125 14:51:01.161726 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q" event={"ID":"6699babf-2b9f-432c-b0fd-60452bb9ad6b","Type":"ContainerDied","Data":"09d6dc2713f036ab3635d8dff6831b301f34fac8b821940df47bf133f265c0ce"} Nov 25 14:51:02 crc kubenswrapper[4796]: I1125 14:51:02.609589 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q" Nov 25 14:51:02 crc kubenswrapper[4796]: I1125 14:51:02.807785 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9z8k\" (UniqueName: \"kubernetes.io/projected/6699babf-2b9f-432c-b0fd-60452bb9ad6b-kube-api-access-d9z8k\") pod \"6699babf-2b9f-432c-b0fd-60452bb9ad6b\" (UID: \"6699babf-2b9f-432c-b0fd-60452bb9ad6b\") " Nov 25 14:51:02 crc kubenswrapper[4796]: I1125 14:51:02.807930 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6699babf-2b9f-432c-b0fd-60452bb9ad6b-repo-setup-combined-ca-bundle\") pod \"6699babf-2b9f-432c-b0fd-60452bb9ad6b\" (UID: \"6699babf-2b9f-432c-b0fd-60452bb9ad6b\") " Nov 25 14:51:02 crc kubenswrapper[4796]: I1125 14:51:02.807979 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6699babf-2b9f-432c-b0fd-60452bb9ad6b-inventory\") pod \"6699babf-2b9f-432c-b0fd-60452bb9ad6b\" (UID: \"6699babf-2b9f-432c-b0fd-60452bb9ad6b\") " Nov 25 14:51:02 crc kubenswrapper[4796]: I1125 14:51:02.808029 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/6699babf-2b9f-432c-b0fd-60452bb9ad6b-ssh-key\") pod \"6699babf-2b9f-432c-b0fd-60452bb9ad6b\" (UID: \"6699babf-2b9f-432c-b0fd-60452bb9ad6b\") " Nov 25 14:51:02 crc kubenswrapper[4796]: I1125 14:51:02.813458 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6699babf-2b9f-432c-b0fd-60452bb9ad6b-kube-api-access-d9z8k" (OuterVolumeSpecName: "kube-api-access-d9z8k") pod "6699babf-2b9f-432c-b0fd-60452bb9ad6b" (UID: "6699babf-2b9f-432c-b0fd-60452bb9ad6b"). InnerVolumeSpecName "kube-api-access-d9z8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:51:02 crc kubenswrapper[4796]: I1125 14:51:02.814852 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6699babf-2b9f-432c-b0fd-60452bb9ad6b-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "6699babf-2b9f-432c-b0fd-60452bb9ad6b" (UID: "6699babf-2b9f-432c-b0fd-60452bb9ad6b"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:51:02 crc kubenswrapper[4796]: I1125 14:51:02.838422 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6699babf-2b9f-432c-b0fd-60452bb9ad6b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6699babf-2b9f-432c-b0fd-60452bb9ad6b" (UID: "6699babf-2b9f-432c-b0fd-60452bb9ad6b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:51:02 crc kubenswrapper[4796]: I1125 14:51:02.856458 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6699babf-2b9f-432c-b0fd-60452bb9ad6b-inventory" (OuterVolumeSpecName: "inventory") pod "6699babf-2b9f-432c-b0fd-60452bb9ad6b" (UID: "6699babf-2b9f-432c-b0fd-60452bb9ad6b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:51:02 crc kubenswrapper[4796]: I1125 14:51:02.911747 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6699babf-2b9f-432c-b0fd-60452bb9ad6b-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 14:51:02 crc kubenswrapper[4796]: I1125 14:51:02.911801 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6699babf-2b9f-432c-b0fd-60452bb9ad6b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 14:51:02 crc kubenswrapper[4796]: I1125 14:51:02.911815 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9z8k\" (UniqueName: \"kubernetes.io/projected/6699babf-2b9f-432c-b0fd-60452bb9ad6b-kube-api-access-d9z8k\") on node \"crc\" DevicePath \"\"" Nov 25 14:51:02 crc kubenswrapper[4796]: I1125 14:51:02.911829 4796 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6699babf-2b9f-432c-b0fd-60452bb9ad6b-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:51:03 crc kubenswrapper[4796]: I1125 14:51:03.183485 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q" event={"ID":"6699babf-2b9f-432c-b0fd-60452bb9ad6b","Type":"ContainerDied","Data":"02ffeee18b422df4fe908def57d4ef3a95c6b8570c9d7ebfda15fbba4081842c"} Nov 25 14:51:03 crc kubenswrapper[4796]: I1125 14:51:03.183534 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02ffeee18b422df4fe908def57d4ef3a95c6b8570c9d7ebfda15fbba4081842c" Nov 25 14:51:03 crc kubenswrapper[4796]: I1125 14:51:03.183680 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q" Nov 25 14:51:03 crc kubenswrapper[4796]: I1125 14:51:03.323383 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8rjcv"] Nov 25 14:51:03 crc kubenswrapper[4796]: E1125 14:51:03.324167 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6699babf-2b9f-432c-b0fd-60452bb9ad6b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 14:51:03 crc kubenswrapper[4796]: I1125 14:51:03.324203 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6699babf-2b9f-432c-b0fd-60452bb9ad6b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 14:51:03 crc kubenswrapper[4796]: I1125 14:51:03.324438 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6699babf-2b9f-432c-b0fd-60452bb9ad6b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 14:51:03 crc kubenswrapper[4796]: I1125 14:51:03.325517 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8rjcv" Nov 25 14:51:03 crc kubenswrapper[4796]: I1125 14:51:03.330295 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 14:51:03 crc kubenswrapper[4796]: I1125 14:51:03.330654 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 14:51:03 crc kubenswrapper[4796]: I1125 14:51:03.330718 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 14:51:03 crc kubenswrapper[4796]: I1125 14:51:03.331376 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n2hfx" Nov 25 14:51:03 crc kubenswrapper[4796]: I1125 14:51:03.346884 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8rjcv"] Nov 25 14:51:03 crc kubenswrapper[4796]: I1125 14:51:03.522524 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdg2j\" (UniqueName: \"kubernetes.io/projected/3fc16f66-6859-4f61-bdbb-7deaf5ec6831-kube-api-access-bdg2j\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8rjcv\" (UID: \"3fc16f66-6859-4f61-bdbb-7deaf5ec6831\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8rjcv" Nov 25 14:51:03 crc kubenswrapper[4796]: I1125 14:51:03.522977 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3fc16f66-6859-4f61-bdbb-7deaf5ec6831-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8rjcv\" (UID: \"3fc16f66-6859-4f61-bdbb-7deaf5ec6831\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8rjcv" Nov 25 14:51:03 crc kubenswrapper[4796]: I1125 14:51:03.523133 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3fc16f66-6859-4f61-bdbb-7deaf5ec6831-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8rjcv\" (UID: \"3fc16f66-6859-4f61-bdbb-7deaf5ec6831\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8rjcv" Nov 25 14:51:03 crc kubenswrapper[4796]: I1125 14:51:03.624890 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdg2j\" (UniqueName: \"kubernetes.io/projected/3fc16f66-6859-4f61-bdbb-7deaf5ec6831-kube-api-access-bdg2j\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8rjcv\" (UID: \"3fc16f66-6859-4f61-bdbb-7deaf5ec6831\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8rjcv" Nov 25 14:51:03 crc kubenswrapper[4796]: I1125 14:51:03.625059 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3fc16f66-6859-4f61-bdbb-7deaf5ec6831-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8rjcv\" (UID: \"3fc16f66-6859-4f61-bdbb-7deaf5ec6831\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8rjcv" Nov 25 14:51:03 crc kubenswrapper[4796]: I1125 14:51:03.625134 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3fc16f66-6859-4f61-bdbb-7deaf5ec6831-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8rjcv\" (UID: \"3fc16f66-6859-4f61-bdbb-7deaf5ec6831\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8rjcv" Nov 25 14:51:03 crc kubenswrapper[4796]: I1125 14:51:03.630834 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3fc16f66-6859-4f61-bdbb-7deaf5ec6831-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8rjcv\" (UID: \"3fc16f66-6859-4f61-bdbb-7deaf5ec6831\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8rjcv" Nov 25 14:51:03 crc kubenswrapper[4796]: I1125 14:51:03.632788 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3fc16f66-6859-4f61-bdbb-7deaf5ec6831-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8rjcv\" (UID: \"3fc16f66-6859-4f61-bdbb-7deaf5ec6831\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8rjcv" Nov 25 14:51:03 crc kubenswrapper[4796]: I1125 14:51:03.646566 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdg2j\" (UniqueName: \"kubernetes.io/projected/3fc16f66-6859-4f61-bdbb-7deaf5ec6831-kube-api-access-bdg2j\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8rjcv\" (UID: \"3fc16f66-6859-4f61-bdbb-7deaf5ec6831\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8rjcv" Nov 25 14:51:03 crc kubenswrapper[4796]: I1125 14:51:03.666990 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8rjcv" Nov 25 14:51:04 crc kubenswrapper[4796]: I1125 14:51:04.245293 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8rjcv"] Nov 25 14:51:05 crc kubenswrapper[4796]: I1125 14:51:05.210326 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8rjcv" event={"ID":"3fc16f66-6859-4f61-bdbb-7deaf5ec6831","Type":"ContainerStarted","Data":"34946d6f642d3ab74672bd2f3afa56994ddfc1d74b2bc344ea464bb53dcfc203"} Nov 25 14:51:05 crc kubenswrapper[4796]: I1125 14:51:05.210947 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8rjcv" event={"ID":"3fc16f66-6859-4f61-bdbb-7deaf5ec6831","Type":"ContainerStarted","Data":"e6881c763cd0daf0b3239b93732228c7a12dd576fbc312ec8dca6270f1008b16"} Nov 25 14:51:05 crc kubenswrapper[4796]: I1125 14:51:05.230183 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8rjcv" podStartSLOduration=1.7825544660000001 podStartE2EDuration="2.230157623s" podCreationTimestamp="2025-11-25 14:51:03 +0000 UTC" firstStartedPulling="2025-11-25 14:51:04.240558521 +0000 UTC m=+1592.583667935" lastFinishedPulling="2025-11-25 14:51:04.688161668 +0000 UTC m=+1593.031271092" observedRunningTime="2025-11-25 14:51:05.225949601 +0000 UTC m=+1593.569059025" watchObservedRunningTime="2025-11-25 14:51:05.230157623 +0000 UTC m=+1593.573267037" Nov 25 14:51:08 crc kubenswrapper[4796]: I1125 14:51:08.240683 4796 generic.go:334] "Generic (PLEG): container finished" podID="3fc16f66-6859-4f61-bdbb-7deaf5ec6831" containerID="34946d6f642d3ab74672bd2f3afa56994ddfc1d74b2bc344ea464bb53dcfc203" exitCode=0 Nov 25 14:51:08 crc kubenswrapper[4796]: I1125 14:51:08.241149 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8rjcv" event={"ID":"3fc16f66-6859-4f61-bdbb-7deaf5ec6831","Type":"ContainerDied","Data":"34946d6f642d3ab74672bd2f3afa56994ddfc1d74b2bc344ea464bb53dcfc203"} Nov 25 14:51:09 crc kubenswrapper[4796]: I1125 14:51:09.718494 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8rjcv" Nov 25 14:51:09 crc kubenswrapper[4796]: I1125 14:51:09.854209 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdg2j\" (UniqueName: \"kubernetes.io/projected/3fc16f66-6859-4f61-bdbb-7deaf5ec6831-kube-api-access-bdg2j\") pod \"3fc16f66-6859-4f61-bdbb-7deaf5ec6831\" (UID: \"3fc16f66-6859-4f61-bdbb-7deaf5ec6831\") " Nov 25 14:51:09 crc kubenswrapper[4796]: I1125 14:51:09.854370 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3fc16f66-6859-4f61-bdbb-7deaf5ec6831-ssh-key\") pod \"3fc16f66-6859-4f61-bdbb-7deaf5ec6831\" (UID: \"3fc16f66-6859-4f61-bdbb-7deaf5ec6831\") " Nov 25 14:51:09 crc kubenswrapper[4796]: I1125 14:51:09.854442 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3fc16f66-6859-4f61-bdbb-7deaf5ec6831-inventory\") pod \"3fc16f66-6859-4f61-bdbb-7deaf5ec6831\" (UID: \"3fc16f66-6859-4f61-bdbb-7deaf5ec6831\") " Nov 25 14:51:09 crc kubenswrapper[4796]: I1125 14:51:09.863746 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fc16f66-6859-4f61-bdbb-7deaf5ec6831-kube-api-access-bdg2j" (OuterVolumeSpecName: "kube-api-access-bdg2j") pod "3fc16f66-6859-4f61-bdbb-7deaf5ec6831" (UID: "3fc16f66-6859-4f61-bdbb-7deaf5ec6831"). InnerVolumeSpecName "kube-api-access-bdg2j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:51:09 crc kubenswrapper[4796]: I1125 14:51:09.895625 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fc16f66-6859-4f61-bdbb-7deaf5ec6831-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3fc16f66-6859-4f61-bdbb-7deaf5ec6831" (UID: "3fc16f66-6859-4f61-bdbb-7deaf5ec6831"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:51:09 crc kubenswrapper[4796]: I1125 14:51:09.904057 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fc16f66-6859-4f61-bdbb-7deaf5ec6831-inventory" (OuterVolumeSpecName: "inventory") pod "3fc16f66-6859-4f61-bdbb-7deaf5ec6831" (UID: "3fc16f66-6859-4f61-bdbb-7deaf5ec6831"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:51:09 crc kubenswrapper[4796]: I1125 14:51:09.990035 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3fc16f66-6859-4f61-bdbb-7deaf5ec6831-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 14:51:09 crc kubenswrapper[4796]: I1125 14:51:09.990057 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3fc16f66-6859-4f61-bdbb-7deaf5ec6831-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 14:51:09 crc kubenswrapper[4796]: I1125 14:51:09.990069 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdg2j\" (UniqueName: \"kubernetes.io/projected/3fc16f66-6859-4f61-bdbb-7deaf5ec6831-kube-api-access-bdg2j\") on node \"crc\" DevicePath \"\"" Nov 25 14:51:10 crc kubenswrapper[4796]: I1125 14:51:10.264483 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8rjcv" 
event={"ID":"3fc16f66-6859-4f61-bdbb-7deaf5ec6831","Type":"ContainerDied","Data":"e6881c763cd0daf0b3239b93732228c7a12dd576fbc312ec8dca6270f1008b16"} Nov 25 14:51:10 crc kubenswrapper[4796]: I1125 14:51:10.264528 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8rjcv" Nov 25 14:51:10 crc kubenswrapper[4796]: I1125 14:51:10.264557 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6881c763cd0daf0b3239b93732228c7a12dd576fbc312ec8dca6270f1008b16" Nov 25 14:51:10 crc kubenswrapper[4796]: I1125 14:51:10.338474 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94"] Nov 25 14:51:10 crc kubenswrapper[4796]: E1125 14:51:10.338906 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fc16f66-6859-4f61-bdbb-7deaf5ec6831" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 25 14:51:10 crc kubenswrapper[4796]: I1125 14:51:10.338925 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fc16f66-6859-4f61-bdbb-7deaf5ec6831" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 25 14:51:10 crc kubenswrapper[4796]: I1125 14:51:10.339129 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fc16f66-6859-4f61-bdbb-7deaf5ec6831" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 25 14:51:10 crc kubenswrapper[4796]: I1125 14:51:10.339978 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94" Nov 25 14:51:10 crc kubenswrapper[4796]: I1125 14:51:10.341825 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 14:51:10 crc kubenswrapper[4796]: I1125 14:51:10.341912 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 14:51:10 crc kubenswrapper[4796]: I1125 14:51:10.342097 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 14:51:10 crc kubenswrapper[4796]: I1125 14:51:10.343096 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n2hfx" Nov 25 14:51:10 crc kubenswrapper[4796]: I1125 14:51:10.352837 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94"] Nov 25 14:51:10 crc kubenswrapper[4796]: I1125 14:51:10.396804 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8wgk\" (UniqueName: \"kubernetes.io/projected/e06f3673-5956-425d-aefa-270976a3804d-kube-api-access-j8wgk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94\" (UID: \"e06f3673-5956-425d-aefa-270976a3804d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94" Nov 25 14:51:10 crc kubenswrapper[4796]: I1125 14:51:10.396964 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e06f3673-5956-425d-aefa-270976a3804d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94\" (UID: \"e06f3673-5956-425d-aefa-270976a3804d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94" Nov 25 14:51:10 crc kubenswrapper[4796]: I1125 
14:51:10.397012 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e06f3673-5956-425d-aefa-270976a3804d-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94\" (UID: \"e06f3673-5956-425d-aefa-270976a3804d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94" Nov 25 14:51:10 crc kubenswrapper[4796]: I1125 14:51:10.397040 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e06f3673-5956-425d-aefa-270976a3804d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94\" (UID: \"e06f3673-5956-425d-aefa-270976a3804d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94" Nov 25 14:51:10 crc kubenswrapper[4796]: I1125 14:51:10.499142 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e06f3673-5956-425d-aefa-270976a3804d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94\" (UID: \"e06f3673-5956-425d-aefa-270976a3804d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94" Nov 25 14:51:10 crc kubenswrapper[4796]: I1125 14:51:10.499238 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e06f3673-5956-425d-aefa-270976a3804d-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94\" (UID: \"e06f3673-5956-425d-aefa-270976a3804d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94" Nov 25 14:51:10 crc kubenswrapper[4796]: I1125 14:51:10.499269 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e06f3673-5956-425d-aefa-270976a3804d-inventory\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94\" (UID: \"e06f3673-5956-425d-aefa-270976a3804d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94" Nov 25 14:51:10 crc kubenswrapper[4796]: I1125 14:51:10.499356 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8wgk\" (UniqueName: \"kubernetes.io/projected/e06f3673-5956-425d-aefa-270976a3804d-kube-api-access-j8wgk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94\" (UID: \"e06f3673-5956-425d-aefa-270976a3804d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94" Nov 25 14:51:10 crc kubenswrapper[4796]: I1125 14:51:10.505663 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e06f3673-5956-425d-aefa-270976a3804d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94\" (UID: \"e06f3673-5956-425d-aefa-270976a3804d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94" Nov 25 14:51:10 crc kubenswrapper[4796]: I1125 14:51:10.509220 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e06f3673-5956-425d-aefa-270976a3804d-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94\" (UID: \"e06f3673-5956-425d-aefa-270976a3804d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94" Nov 25 14:51:10 crc kubenswrapper[4796]: I1125 14:51:10.510564 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e06f3673-5956-425d-aefa-270976a3804d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94\" (UID: \"e06f3673-5956-425d-aefa-270976a3804d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94" Nov 25 14:51:10 crc kubenswrapper[4796]: I1125 14:51:10.520654 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-j8wgk\" (UniqueName: \"kubernetes.io/projected/e06f3673-5956-425d-aefa-270976a3804d-kube-api-access-j8wgk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94\" (UID: \"e06f3673-5956-425d-aefa-270976a3804d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94" Nov 25 14:51:10 crc kubenswrapper[4796]: I1125 14:51:10.661794 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94" Nov 25 14:51:11 crc kubenswrapper[4796]: I1125 14:51:11.204730 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94"] Nov 25 14:51:11 crc kubenswrapper[4796]: I1125 14:51:11.276133 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94" event={"ID":"e06f3673-5956-425d-aefa-270976a3804d","Type":"ContainerStarted","Data":"4acb5a7e7b75e88e0b1c105d5fb71c917f52f2eff7176187ff456aa658f01ee8"} Nov 25 14:51:12 crc kubenswrapper[4796]: I1125 14:51:12.293358 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94" event={"ID":"e06f3673-5956-425d-aefa-270976a3804d","Type":"ContainerStarted","Data":"710716c2397c603477d7e22d41aad23a86e1e9e1a0e721467450b7ac27f416f9"} Nov 25 14:51:14 crc kubenswrapper[4796]: I1125 14:51:14.410290 4796 scope.go:117] "RemoveContainer" containerID="62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" Nov 25 14:51:14 crc kubenswrapper[4796]: E1125 14:51:14.411047 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 14:51:25 crc kubenswrapper[4796]: I1125 14:51:25.338415 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94" podStartSLOduration=14.868981904 podStartE2EDuration="15.338396525s" podCreationTimestamp="2025-11-25 14:51:10 +0000 UTC" firstStartedPulling="2025-11-25 14:51:11.213939436 +0000 UTC m=+1599.557048860" lastFinishedPulling="2025-11-25 14:51:11.683354057 +0000 UTC m=+1600.026463481" observedRunningTime="2025-11-25 14:51:12.315500796 +0000 UTC m=+1600.658610240" watchObservedRunningTime="2025-11-25 14:51:25.338396525 +0000 UTC m=+1613.681505959" Nov 25 14:51:25 crc kubenswrapper[4796]: I1125 14:51:25.340597 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q7q4w"] Nov 25 14:51:25 crc kubenswrapper[4796]: I1125 14:51:25.343034 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q7q4w" Nov 25 14:51:25 crc kubenswrapper[4796]: I1125 14:51:25.361429 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q7q4w"] Nov 25 14:51:25 crc kubenswrapper[4796]: I1125 14:51:25.409600 4796 scope.go:117] "RemoveContainer" containerID="62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" Nov 25 14:51:25 crc kubenswrapper[4796]: E1125 14:51:25.409948 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 14:51:25 crc kubenswrapper[4796]: I1125 14:51:25.509817 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ed803b-ff54-413e-9042-1fbe72426085-catalog-content\") pod \"certified-operators-q7q4w\" (UID: \"81ed803b-ff54-413e-9042-1fbe72426085\") " pod="openshift-marketplace/certified-operators-q7q4w" Nov 25 14:51:25 crc kubenswrapper[4796]: I1125 14:51:25.510083 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvd4q\" (UniqueName: \"kubernetes.io/projected/81ed803b-ff54-413e-9042-1fbe72426085-kube-api-access-gvd4q\") pod \"certified-operators-q7q4w\" (UID: \"81ed803b-ff54-413e-9042-1fbe72426085\") " pod="openshift-marketplace/certified-operators-q7q4w" Nov 25 14:51:25 crc kubenswrapper[4796]: I1125 14:51:25.510243 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/81ed803b-ff54-413e-9042-1fbe72426085-utilities\") pod \"certified-operators-q7q4w\" (UID: \"81ed803b-ff54-413e-9042-1fbe72426085\") " pod="openshift-marketplace/certified-operators-q7q4w" Nov 25 14:51:25 crc kubenswrapper[4796]: I1125 14:51:25.612283 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ed803b-ff54-413e-9042-1fbe72426085-utilities\") pod \"certified-operators-q7q4w\" (UID: \"81ed803b-ff54-413e-9042-1fbe72426085\") " pod="openshift-marketplace/certified-operators-q7q4w" Nov 25 14:51:25 crc kubenswrapper[4796]: I1125 14:51:25.612462 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ed803b-ff54-413e-9042-1fbe72426085-catalog-content\") pod \"certified-operators-q7q4w\" (UID: \"81ed803b-ff54-413e-9042-1fbe72426085\") " pod="openshift-marketplace/certified-operators-q7q4w" Nov 25 14:51:25 crc kubenswrapper[4796]: I1125 14:51:25.612497 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvd4q\" (UniqueName: \"kubernetes.io/projected/81ed803b-ff54-413e-9042-1fbe72426085-kube-api-access-gvd4q\") pod \"certified-operators-q7q4w\" (UID: \"81ed803b-ff54-413e-9042-1fbe72426085\") " pod="openshift-marketplace/certified-operators-q7q4w" Nov 25 14:51:25 crc kubenswrapper[4796]: I1125 14:51:25.613005 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ed803b-ff54-413e-9042-1fbe72426085-utilities\") pod \"certified-operators-q7q4w\" (UID: \"81ed803b-ff54-413e-9042-1fbe72426085\") " pod="openshift-marketplace/certified-operators-q7q4w" Nov 25 14:51:25 crc kubenswrapper[4796]: I1125 14:51:25.613053 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/81ed803b-ff54-413e-9042-1fbe72426085-catalog-content\") pod \"certified-operators-q7q4w\" (UID: \"81ed803b-ff54-413e-9042-1fbe72426085\") " pod="openshift-marketplace/certified-operators-q7q4w" Nov 25 14:51:25 crc kubenswrapper[4796]: I1125 14:51:25.631800 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvd4q\" (UniqueName: \"kubernetes.io/projected/81ed803b-ff54-413e-9042-1fbe72426085-kube-api-access-gvd4q\") pod \"certified-operators-q7q4w\" (UID: \"81ed803b-ff54-413e-9042-1fbe72426085\") " pod="openshift-marketplace/certified-operators-q7q4w" Nov 25 14:51:25 crc kubenswrapper[4796]: I1125 14:51:25.699421 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q7q4w" Nov 25 14:51:26 crc kubenswrapper[4796]: W1125 14:51:26.222700 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81ed803b_ff54_413e_9042_1fbe72426085.slice/crio-a3b60731ed69345dfdbdd52ef9b56af23c92ab7f0246bfc87538ff4be1adc9b9 WatchSource:0}: Error finding container a3b60731ed69345dfdbdd52ef9b56af23c92ab7f0246bfc87538ff4be1adc9b9: Status 404 returned error can't find the container with id a3b60731ed69345dfdbdd52ef9b56af23c92ab7f0246bfc87538ff4be1adc9b9 Nov 25 14:51:26 crc kubenswrapper[4796]: I1125 14:51:26.224440 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q7q4w"] Nov 25 14:51:26 crc kubenswrapper[4796]: I1125 14:51:26.473948 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7q4w" event={"ID":"81ed803b-ff54-413e-9042-1fbe72426085","Type":"ContainerStarted","Data":"a3b60731ed69345dfdbdd52ef9b56af23c92ab7f0246bfc87538ff4be1adc9b9"} Nov 25 14:51:27 crc kubenswrapper[4796]: I1125 14:51:27.485952 4796 generic.go:334] "Generic (PLEG): container finished" 
podID="81ed803b-ff54-413e-9042-1fbe72426085" containerID="568c793b7a8508a743151c7d264ffd9223b809416a92dcad508b73acce09be03" exitCode=0 Nov 25 14:51:27 crc kubenswrapper[4796]: I1125 14:51:27.486038 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7q4w" event={"ID":"81ed803b-ff54-413e-9042-1fbe72426085","Type":"ContainerDied","Data":"568c793b7a8508a743151c7d264ffd9223b809416a92dcad508b73acce09be03"} Nov 25 14:51:28 crc kubenswrapper[4796]: I1125 14:51:28.499426 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7q4w" event={"ID":"81ed803b-ff54-413e-9042-1fbe72426085","Type":"ContainerStarted","Data":"fde0558efc93e1daf53809cda4116dff1e831f4fa2eeef80986ab93b9761e124"} Nov 25 14:51:29 crc kubenswrapper[4796]: I1125 14:51:29.519139 4796 generic.go:334] "Generic (PLEG): container finished" podID="81ed803b-ff54-413e-9042-1fbe72426085" containerID="fde0558efc93e1daf53809cda4116dff1e831f4fa2eeef80986ab93b9761e124" exitCode=0 Nov 25 14:51:29 crc kubenswrapper[4796]: I1125 14:51:29.519215 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7q4w" event={"ID":"81ed803b-ff54-413e-9042-1fbe72426085","Type":"ContainerDied","Data":"fde0558efc93e1daf53809cda4116dff1e831f4fa2eeef80986ab93b9761e124"} Nov 25 14:51:30 crc kubenswrapper[4796]: I1125 14:51:30.531996 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7q4w" event={"ID":"81ed803b-ff54-413e-9042-1fbe72426085","Type":"ContainerStarted","Data":"525eb6a9edf5f3500204623422bac8d491c7c426ded98aa42960f3b35ca066c2"} Nov 25 14:51:30 crc kubenswrapper[4796]: I1125 14:51:30.555238 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q7q4w" podStartSLOduration=3.090132013 podStartE2EDuration="5.555222306s" podCreationTimestamp="2025-11-25 14:51:25 +0000 UTC" 
firstStartedPulling="2025-11-25 14:51:27.488748051 +0000 UTC m=+1615.831857495" lastFinishedPulling="2025-11-25 14:51:29.953838374 +0000 UTC m=+1618.296947788" observedRunningTime="2025-11-25 14:51:30.552845732 +0000 UTC m=+1618.895955176" watchObservedRunningTime="2025-11-25 14:51:30.555222306 +0000 UTC m=+1618.898331730" Nov 25 14:51:33 crc kubenswrapper[4796]: I1125 14:51:33.249642 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5zljj"] Nov 25 14:51:33 crc kubenswrapper[4796]: I1125 14:51:33.253224 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5zljj" Nov 25 14:51:33 crc kubenswrapper[4796]: I1125 14:51:33.263818 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5zljj"] Nov 25 14:51:33 crc kubenswrapper[4796]: I1125 14:51:33.381669 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd359be4-d9ea-4590-93b1-f75dc900852f-catalog-content\") pod \"community-operators-5zljj\" (UID: \"fd359be4-d9ea-4590-93b1-f75dc900852f\") " pod="openshift-marketplace/community-operators-5zljj" Nov 25 14:51:33 crc kubenswrapper[4796]: I1125 14:51:33.381761 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4scmx\" (UniqueName: \"kubernetes.io/projected/fd359be4-d9ea-4590-93b1-f75dc900852f-kube-api-access-4scmx\") pod \"community-operators-5zljj\" (UID: \"fd359be4-d9ea-4590-93b1-f75dc900852f\") " pod="openshift-marketplace/community-operators-5zljj" Nov 25 14:51:33 crc kubenswrapper[4796]: I1125 14:51:33.381792 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd359be4-d9ea-4590-93b1-f75dc900852f-utilities\") pod 
\"community-operators-5zljj\" (UID: \"fd359be4-d9ea-4590-93b1-f75dc900852f\") " pod="openshift-marketplace/community-operators-5zljj" Nov 25 14:51:33 crc kubenswrapper[4796]: I1125 14:51:33.483752 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd359be4-d9ea-4590-93b1-f75dc900852f-catalog-content\") pod \"community-operators-5zljj\" (UID: \"fd359be4-d9ea-4590-93b1-f75dc900852f\") " pod="openshift-marketplace/community-operators-5zljj" Nov 25 14:51:33 crc kubenswrapper[4796]: I1125 14:51:33.483835 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4scmx\" (UniqueName: \"kubernetes.io/projected/fd359be4-d9ea-4590-93b1-f75dc900852f-kube-api-access-4scmx\") pod \"community-operators-5zljj\" (UID: \"fd359be4-d9ea-4590-93b1-f75dc900852f\") " pod="openshift-marketplace/community-operators-5zljj" Nov 25 14:51:33 crc kubenswrapper[4796]: I1125 14:51:33.483868 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd359be4-d9ea-4590-93b1-f75dc900852f-utilities\") pod \"community-operators-5zljj\" (UID: \"fd359be4-d9ea-4590-93b1-f75dc900852f\") " pod="openshift-marketplace/community-operators-5zljj" Nov 25 14:51:33 crc kubenswrapper[4796]: I1125 14:51:33.484562 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd359be4-d9ea-4590-93b1-f75dc900852f-utilities\") pod \"community-operators-5zljj\" (UID: \"fd359be4-d9ea-4590-93b1-f75dc900852f\") " pod="openshift-marketplace/community-operators-5zljj" Nov 25 14:51:33 crc kubenswrapper[4796]: I1125 14:51:33.484606 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd359be4-d9ea-4590-93b1-f75dc900852f-catalog-content\") pod \"community-operators-5zljj\" (UID: 
\"fd359be4-d9ea-4590-93b1-f75dc900852f\") " pod="openshift-marketplace/community-operators-5zljj" Nov 25 14:51:33 crc kubenswrapper[4796]: I1125 14:51:33.503990 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4scmx\" (UniqueName: \"kubernetes.io/projected/fd359be4-d9ea-4590-93b1-f75dc900852f-kube-api-access-4scmx\") pod \"community-operators-5zljj\" (UID: \"fd359be4-d9ea-4590-93b1-f75dc900852f\") " pod="openshift-marketplace/community-operators-5zljj" Nov 25 14:51:33 crc kubenswrapper[4796]: I1125 14:51:33.582433 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5zljj" Nov 25 14:51:34 crc kubenswrapper[4796]: I1125 14:51:34.119029 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5zljj"] Nov 25 14:51:34 crc kubenswrapper[4796]: I1125 14:51:34.569937 4796 generic.go:334] "Generic (PLEG): container finished" podID="fd359be4-d9ea-4590-93b1-f75dc900852f" containerID="82ceb457f8f9dcdefba0fc958ed5bab61a3891b411b04ee998950d65afc38176" exitCode=0 Nov 25 14:51:34 crc kubenswrapper[4796]: I1125 14:51:34.570004 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zljj" event={"ID":"fd359be4-d9ea-4590-93b1-f75dc900852f","Type":"ContainerDied","Data":"82ceb457f8f9dcdefba0fc958ed5bab61a3891b411b04ee998950d65afc38176"} Nov 25 14:51:34 crc kubenswrapper[4796]: I1125 14:51:34.570254 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zljj" event={"ID":"fd359be4-d9ea-4590-93b1-f75dc900852f","Type":"ContainerStarted","Data":"7f73d739db39bc159758a91905888323e77ed18d36d902a413edf32ef52c1ed4"} Nov 25 14:51:35 crc kubenswrapper[4796]: I1125 14:51:35.699646 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q7q4w" Nov 25 14:51:35 crc kubenswrapper[4796]: I1125 
14:51:35.700200 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q7q4w" Nov 25 14:51:35 crc kubenswrapper[4796]: I1125 14:51:35.761326 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q7q4w" Nov 25 14:51:36 crc kubenswrapper[4796]: I1125 14:51:36.594005 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zljj" event={"ID":"fd359be4-d9ea-4590-93b1-f75dc900852f","Type":"ContainerStarted","Data":"89d47a9f77aea5fd299b558b7354cde072fa83dc4b487001e45ce8b8be6b5576"} Nov 25 14:51:36 crc kubenswrapper[4796]: I1125 14:51:36.643529 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q7q4w" Nov 25 14:51:37 crc kubenswrapper[4796]: I1125 14:51:37.112795 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q7q4w"] Nov 25 14:51:37 crc kubenswrapper[4796]: I1125 14:51:37.410081 4796 scope.go:117] "RemoveContainer" containerID="62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" Nov 25 14:51:37 crc kubenswrapper[4796]: E1125 14:51:37.410630 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 14:51:38 crc kubenswrapper[4796]: I1125 14:51:38.619773 4796 generic.go:334] "Generic (PLEG): container finished" podID="fd359be4-d9ea-4590-93b1-f75dc900852f" containerID="89d47a9f77aea5fd299b558b7354cde072fa83dc4b487001e45ce8b8be6b5576" exitCode=0 Nov 25 14:51:38 crc kubenswrapper[4796]: I1125 
14:51:38.619948 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zljj" event={"ID":"fd359be4-d9ea-4590-93b1-f75dc900852f","Type":"ContainerDied","Data":"89d47a9f77aea5fd299b558b7354cde072fa83dc4b487001e45ce8b8be6b5576"} Nov 25 14:51:38 crc kubenswrapper[4796]: I1125 14:51:38.620336 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q7q4w" podUID="81ed803b-ff54-413e-9042-1fbe72426085" containerName="registry-server" containerID="cri-o://525eb6a9edf5f3500204623422bac8d491c7c426ded98aa42960f3b35ca066c2" gracePeriod=2 Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.090373 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q7q4w" Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.203527 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ed803b-ff54-413e-9042-1fbe72426085-catalog-content\") pod \"81ed803b-ff54-413e-9042-1fbe72426085\" (UID: \"81ed803b-ff54-413e-9042-1fbe72426085\") " Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.203566 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvd4q\" (UniqueName: \"kubernetes.io/projected/81ed803b-ff54-413e-9042-1fbe72426085-kube-api-access-gvd4q\") pod \"81ed803b-ff54-413e-9042-1fbe72426085\" (UID: \"81ed803b-ff54-413e-9042-1fbe72426085\") " Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.203628 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ed803b-ff54-413e-9042-1fbe72426085-utilities\") pod \"81ed803b-ff54-413e-9042-1fbe72426085\" (UID: \"81ed803b-ff54-413e-9042-1fbe72426085\") " Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.204613 4796 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81ed803b-ff54-413e-9042-1fbe72426085-utilities" (OuterVolumeSpecName: "utilities") pod "81ed803b-ff54-413e-9042-1fbe72426085" (UID: "81ed803b-ff54-413e-9042-1fbe72426085"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.209658 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ed803b-ff54-413e-9042-1fbe72426085-kube-api-access-gvd4q" (OuterVolumeSpecName: "kube-api-access-gvd4q") pod "81ed803b-ff54-413e-9042-1fbe72426085" (UID: "81ed803b-ff54-413e-9042-1fbe72426085"). InnerVolumeSpecName "kube-api-access-gvd4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.262843 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81ed803b-ff54-413e-9042-1fbe72426085-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81ed803b-ff54-413e-9042-1fbe72426085" (UID: "81ed803b-ff54-413e-9042-1fbe72426085"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.305920 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ed803b-ff54-413e-9042-1fbe72426085-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.305958 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvd4q\" (UniqueName: \"kubernetes.io/projected/81ed803b-ff54-413e-9042-1fbe72426085-kube-api-access-gvd4q\") on node \"crc\" DevicePath \"\"" Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.305972 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ed803b-ff54-413e-9042-1fbe72426085-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.637497 4796 generic.go:334] "Generic (PLEG): container finished" podID="81ed803b-ff54-413e-9042-1fbe72426085" containerID="525eb6a9edf5f3500204623422bac8d491c7c426ded98aa42960f3b35ca066c2" exitCode=0 Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.637546 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q7q4w" Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.637567 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7q4w" event={"ID":"81ed803b-ff54-413e-9042-1fbe72426085","Type":"ContainerDied","Data":"525eb6a9edf5f3500204623422bac8d491c7c426ded98aa42960f3b35ca066c2"} Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.638670 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7q4w" event={"ID":"81ed803b-ff54-413e-9042-1fbe72426085","Type":"ContainerDied","Data":"a3b60731ed69345dfdbdd52ef9b56af23c92ab7f0246bfc87538ff4be1adc9b9"} Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.638700 4796 scope.go:117] "RemoveContainer" containerID="525eb6a9edf5f3500204623422bac8d491c7c426ded98aa42960f3b35ca066c2" Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.646487 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zljj" event={"ID":"fd359be4-d9ea-4590-93b1-f75dc900852f","Type":"ContainerStarted","Data":"7a57ae73b41a40f03d374727dde2e99f59581cf2c7d864f32d450f2880ce8489"} Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.659544 4796 scope.go:117] "RemoveContainer" containerID="fde0558efc93e1daf53809cda4116dff1e831f4fa2eeef80986ab93b9761e124" Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.667930 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5zljj" podStartSLOduration=1.988891255 podStartE2EDuration="6.667912001s" podCreationTimestamp="2025-11-25 14:51:33 +0000 UTC" firstStartedPulling="2025-11-25 14:51:34.572266626 +0000 UTC m=+1622.915376060" lastFinishedPulling="2025-11-25 14:51:39.251287382 +0000 UTC m=+1627.594396806" observedRunningTime="2025-11-25 14:51:39.665995691 +0000 UTC m=+1628.009105115" watchObservedRunningTime="2025-11-25 
14:51:39.667912001 +0000 UTC m=+1628.011021445" Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.698225 4796 scope.go:117] "RemoveContainer" containerID="568c793b7a8508a743151c7d264ffd9223b809416a92dcad508b73acce09be03" Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.699474 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q7q4w"] Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.707637 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q7q4w"] Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.717600 4796 scope.go:117] "RemoveContainer" containerID="525eb6a9edf5f3500204623422bac8d491c7c426ded98aa42960f3b35ca066c2" Nov 25 14:51:39 crc kubenswrapper[4796]: E1125 14:51:39.717976 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"525eb6a9edf5f3500204623422bac8d491c7c426ded98aa42960f3b35ca066c2\": container with ID starting with 525eb6a9edf5f3500204623422bac8d491c7c426ded98aa42960f3b35ca066c2 not found: ID does not exist" containerID="525eb6a9edf5f3500204623422bac8d491c7c426ded98aa42960f3b35ca066c2" Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.718008 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"525eb6a9edf5f3500204623422bac8d491c7c426ded98aa42960f3b35ca066c2"} err="failed to get container status \"525eb6a9edf5f3500204623422bac8d491c7c426ded98aa42960f3b35ca066c2\": rpc error: code = NotFound desc = could not find container \"525eb6a9edf5f3500204623422bac8d491c7c426ded98aa42960f3b35ca066c2\": container with ID starting with 525eb6a9edf5f3500204623422bac8d491c7c426ded98aa42960f3b35ca066c2 not found: ID does not exist" Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.718028 4796 scope.go:117] "RemoveContainer" containerID="fde0558efc93e1daf53809cda4116dff1e831f4fa2eeef80986ab93b9761e124" Nov 25 
14:51:39 crc kubenswrapper[4796]: E1125 14:51:39.718312 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fde0558efc93e1daf53809cda4116dff1e831f4fa2eeef80986ab93b9761e124\": container with ID starting with fde0558efc93e1daf53809cda4116dff1e831f4fa2eeef80986ab93b9761e124 not found: ID does not exist" containerID="fde0558efc93e1daf53809cda4116dff1e831f4fa2eeef80986ab93b9761e124" Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.718342 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fde0558efc93e1daf53809cda4116dff1e831f4fa2eeef80986ab93b9761e124"} err="failed to get container status \"fde0558efc93e1daf53809cda4116dff1e831f4fa2eeef80986ab93b9761e124\": rpc error: code = NotFound desc = could not find container \"fde0558efc93e1daf53809cda4116dff1e831f4fa2eeef80986ab93b9761e124\": container with ID starting with fde0558efc93e1daf53809cda4116dff1e831f4fa2eeef80986ab93b9761e124 not found: ID does not exist" Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.718359 4796 scope.go:117] "RemoveContainer" containerID="568c793b7a8508a743151c7d264ffd9223b809416a92dcad508b73acce09be03" Nov 25 14:51:39 crc kubenswrapper[4796]: E1125 14:51:39.718566 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"568c793b7a8508a743151c7d264ffd9223b809416a92dcad508b73acce09be03\": container with ID starting with 568c793b7a8508a743151c7d264ffd9223b809416a92dcad508b73acce09be03 not found: ID does not exist" containerID="568c793b7a8508a743151c7d264ffd9223b809416a92dcad508b73acce09be03" Nov 25 14:51:39 crc kubenswrapper[4796]: I1125 14:51:39.718597 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"568c793b7a8508a743151c7d264ffd9223b809416a92dcad508b73acce09be03"} err="failed to get container status 
\"568c793b7a8508a743151c7d264ffd9223b809416a92dcad508b73acce09be03\": rpc error: code = NotFound desc = could not find container \"568c793b7a8508a743151c7d264ffd9223b809416a92dcad508b73acce09be03\": container with ID starting with 568c793b7a8508a743151c7d264ffd9223b809416a92dcad508b73acce09be03 not found: ID does not exist" Nov 25 14:51:40 crc kubenswrapper[4796]: I1125 14:51:40.428931 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81ed803b-ff54-413e-9042-1fbe72426085" path="/var/lib/kubelet/pods/81ed803b-ff54-413e-9042-1fbe72426085/volumes" Nov 25 14:51:43 crc kubenswrapper[4796]: I1125 14:51:43.582646 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5zljj" Nov 25 14:51:43 crc kubenswrapper[4796]: I1125 14:51:43.583014 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5zljj" Nov 25 14:51:44 crc kubenswrapper[4796]: I1125 14:51:44.636657 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5zljj" podUID="fd359be4-d9ea-4590-93b1-f75dc900852f" containerName="registry-server" probeResult="failure" output=< Nov 25 14:51:44 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Nov 25 14:51:44 crc kubenswrapper[4796]: > Nov 25 14:51:45 crc kubenswrapper[4796]: I1125 14:51:45.780761 4796 scope.go:117] "RemoveContainer" containerID="043b186c4b59ec956c6acb2750a19721e8aeeb3f5ce985bffffd8f6878b862e2" Nov 25 14:51:49 crc kubenswrapper[4796]: I1125 14:51:49.409875 4796 scope.go:117] "RemoveContainer" containerID="62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" Nov 25 14:51:49 crc kubenswrapper[4796]: E1125 14:51:49.410650 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 14:51:53 crc kubenswrapper[4796]: I1125 14:51:53.657557 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5zljj" Nov 25 14:51:53 crc kubenswrapper[4796]: I1125 14:51:53.722076 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5zljj" Nov 25 14:51:53 crc kubenswrapper[4796]: I1125 14:51:53.901745 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5zljj"] Nov 25 14:51:54 crc kubenswrapper[4796]: I1125 14:51:54.788506 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5zljj" podUID="fd359be4-d9ea-4590-93b1-f75dc900852f" containerName="registry-server" containerID="cri-o://7a57ae73b41a40f03d374727dde2e99f59581cf2c7d864f32d450f2880ce8489" gracePeriod=2 Nov 25 14:51:55 crc kubenswrapper[4796]: I1125 14:51:55.343404 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5zljj" Nov 25 14:51:55 crc kubenswrapper[4796]: I1125 14:51:55.414500 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4scmx\" (UniqueName: \"kubernetes.io/projected/fd359be4-d9ea-4590-93b1-f75dc900852f-kube-api-access-4scmx\") pod \"fd359be4-d9ea-4590-93b1-f75dc900852f\" (UID: \"fd359be4-d9ea-4590-93b1-f75dc900852f\") " Nov 25 14:51:55 crc kubenswrapper[4796]: I1125 14:51:55.414632 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd359be4-d9ea-4590-93b1-f75dc900852f-catalog-content\") pod \"fd359be4-d9ea-4590-93b1-f75dc900852f\" (UID: \"fd359be4-d9ea-4590-93b1-f75dc900852f\") " Nov 25 14:51:55 crc kubenswrapper[4796]: I1125 14:51:55.414719 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd359be4-d9ea-4590-93b1-f75dc900852f-utilities\") pod \"fd359be4-d9ea-4590-93b1-f75dc900852f\" (UID: \"fd359be4-d9ea-4590-93b1-f75dc900852f\") " Nov 25 14:51:55 crc kubenswrapper[4796]: I1125 14:51:55.415489 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd359be4-d9ea-4590-93b1-f75dc900852f-utilities" (OuterVolumeSpecName: "utilities") pod "fd359be4-d9ea-4590-93b1-f75dc900852f" (UID: "fd359be4-d9ea-4590-93b1-f75dc900852f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:51:55 crc kubenswrapper[4796]: I1125 14:51:55.420873 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd359be4-d9ea-4590-93b1-f75dc900852f-kube-api-access-4scmx" (OuterVolumeSpecName: "kube-api-access-4scmx") pod "fd359be4-d9ea-4590-93b1-f75dc900852f" (UID: "fd359be4-d9ea-4590-93b1-f75dc900852f"). InnerVolumeSpecName "kube-api-access-4scmx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:51:55 crc kubenswrapper[4796]: I1125 14:51:55.463314 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd359be4-d9ea-4590-93b1-f75dc900852f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd359be4-d9ea-4590-93b1-f75dc900852f" (UID: "fd359be4-d9ea-4590-93b1-f75dc900852f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:51:55 crc kubenswrapper[4796]: I1125 14:51:55.517014 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4scmx\" (UniqueName: \"kubernetes.io/projected/fd359be4-d9ea-4590-93b1-f75dc900852f-kube-api-access-4scmx\") on node \"crc\" DevicePath \"\"" Nov 25 14:51:55 crc kubenswrapper[4796]: I1125 14:51:55.517060 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd359be4-d9ea-4590-93b1-f75dc900852f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:51:55 crc kubenswrapper[4796]: I1125 14:51:55.517073 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd359be4-d9ea-4590-93b1-f75dc900852f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:51:55 crc kubenswrapper[4796]: I1125 14:51:55.809076 4796 generic.go:334] "Generic (PLEG): container finished" podID="fd359be4-d9ea-4590-93b1-f75dc900852f" containerID="7a57ae73b41a40f03d374727dde2e99f59581cf2c7d864f32d450f2880ce8489" exitCode=0 Nov 25 14:51:55 crc kubenswrapper[4796]: I1125 14:51:55.809155 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5zljj" Nov 25 14:51:55 crc kubenswrapper[4796]: I1125 14:51:55.810198 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zljj" event={"ID":"fd359be4-d9ea-4590-93b1-f75dc900852f","Type":"ContainerDied","Data":"7a57ae73b41a40f03d374727dde2e99f59581cf2c7d864f32d450f2880ce8489"} Nov 25 14:51:55 crc kubenswrapper[4796]: I1125 14:51:55.810330 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zljj" event={"ID":"fd359be4-d9ea-4590-93b1-f75dc900852f","Type":"ContainerDied","Data":"7f73d739db39bc159758a91905888323e77ed18d36d902a413edf32ef52c1ed4"} Nov 25 14:51:55 crc kubenswrapper[4796]: I1125 14:51:55.810404 4796 scope.go:117] "RemoveContainer" containerID="7a57ae73b41a40f03d374727dde2e99f59581cf2c7d864f32d450f2880ce8489" Nov 25 14:51:55 crc kubenswrapper[4796]: I1125 14:51:55.840404 4796 scope.go:117] "RemoveContainer" containerID="89d47a9f77aea5fd299b558b7354cde072fa83dc4b487001e45ce8b8be6b5576" Nov 25 14:51:55 crc kubenswrapper[4796]: I1125 14:51:55.849028 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5zljj"] Nov 25 14:51:55 crc kubenswrapper[4796]: I1125 14:51:55.860499 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5zljj"] Nov 25 14:51:55 crc kubenswrapper[4796]: I1125 14:51:55.872038 4796 scope.go:117] "RemoveContainer" containerID="82ceb457f8f9dcdefba0fc958ed5bab61a3891b411b04ee998950d65afc38176" Nov 25 14:51:55 crc kubenswrapper[4796]: I1125 14:51:55.929283 4796 scope.go:117] "RemoveContainer" containerID="7a57ae73b41a40f03d374727dde2e99f59581cf2c7d864f32d450f2880ce8489" Nov 25 14:51:55 crc kubenswrapper[4796]: E1125 14:51:55.930289 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7a57ae73b41a40f03d374727dde2e99f59581cf2c7d864f32d450f2880ce8489\": container with ID starting with 7a57ae73b41a40f03d374727dde2e99f59581cf2c7d864f32d450f2880ce8489 not found: ID does not exist" containerID="7a57ae73b41a40f03d374727dde2e99f59581cf2c7d864f32d450f2880ce8489" Nov 25 14:51:55 crc kubenswrapper[4796]: I1125 14:51:55.930361 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a57ae73b41a40f03d374727dde2e99f59581cf2c7d864f32d450f2880ce8489"} err="failed to get container status \"7a57ae73b41a40f03d374727dde2e99f59581cf2c7d864f32d450f2880ce8489\": rpc error: code = NotFound desc = could not find container \"7a57ae73b41a40f03d374727dde2e99f59581cf2c7d864f32d450f2880ce8489\": container with ID starting with 7a57ae73b41a40f03d374727dde2e99f59581cf2c7d864f32d450f2880ce8489 not found: ID does not exist" Nov 25 14:51:55 crc kubenswrapper[4796]: I1125 14:51:55.930396 4796 scope.go:117] "RemoveContainer" containerID="89d47a9f77aea5fd299b558b7354cde072fa83dc4b487001e45ce8b8be6b5576" Nov 25 14:51:55 crc kubenswrapper[4796]: E1125 14:51:55.930858 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89d47a9f77aea5fd299b558b7354cde072fa83dc4b487001e45ce8b8be6b5576\": container with ID starting with 89d47a9f77aea5fd299b558b7354cde072fa83dc4b487001e45ce8b8be6b5576 not found: ID does not exist" containerID="89d47a9f77aea5fd299b558b7354cde072fa83dc4b487001e45ce8b8be6b5576" Nov 25 14:51:55 crc kubenswrapper[4796]: I1125 14:51:55.930905 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89d47a9f77aea5fd299b558b7354cde072fa83dc4b487001e45ce8b8be6b5576"} err="failed to get container status \"89d47a9f77aea5fd299b558b7354cde072fa83dc4b487001e45ce8b8be6b5576\": rpc error: code = NotFound desc = could not find container \"89d47a9f77aea5fd299b558b7354cde072fa83dc4b487001e45ce8b8be6b5576\": container with ID 
starting with 89d47a9f77aea5fd299b558b7354cde072fa83dc4b487001e45ce8b8be6b5576 not found: ID does not exist" Nov 25 14:51:55 crc kubenswrapper[4796]: I1125 14:51:55.930934 4796 scope.go:117] "RemoveContainer" containerID="82ceb457f8f9dcdefba0fc958ed5bab61a3891b411b04ee998950d65afc38176" Nov 25 14:51:55 crc kubenswrapper[4796]: E1125 14:51:55.931287 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82ceb457f8f9dcdefba0fc958ed5bab61a3891b411b04ee998950d65afc38176\": container with ID starting with 82ceb457f8f9dcdefba0fc958ed5bab61a3891b411b04ee998950d65afc38176 not found: ID does not exist" containerID="82ceb457f8f9dcdefba0fc958ed5bab61a3891b411b04ee998950d65afc38176" Nov 25 14:51:55 crc kubenswrapper[4796]: I1125 14:51:55.931315 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82ceb457f8f9dcdefba0fc958ed5bab61a3891b411b04ee998950d65afc38176"} err="failed to get container status \"82ceb457f8f9dcdefba0fc958ed5bab61a3891b411b04ee998950d65afc38176\": rpc error: code = NotFound desc = could not find container \"82ceb457f8f9dcdefba0fc958ed5bab61a3891b411b04ee998950d65afc38176\": container with ID starting with 82ceb457f8f9dcdefba0fc958ed5bab61a3891b411b04ee998950d65afc38176 not found: ID does not exist" Nov 25 14:51:56 crc kubenswrapper[4796]: I1125 14:51:56.426739 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd359be4-d9ea-4590-93b1-f75dc900852f" path="/var/lib/kubelet/pods/fd359be4-d9ea-4590-93b1-f75dc900852f/volumes" Nov 25 14:52:02 crc kubenswrapper[4796]: I1125 14:52:02.416370 4796 scope.go:117] "RemoveContainer" containerID="62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" Nov 25 14:52:02 crc kubenswrapper[4796]: E1125 14:52:02.417109 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 14:52:17 crc kubenswrapper[4796]: I1125 14:52:17.409745 4796 scope.go:117] "RemoveContainer" containerID="62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" Nov 25 14:52:17 crc kubenswrapper[4796]: E1125 14:52:17.411061 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 14:52:28 crc kubenswrapper[4796]: I1125 14:52:28.409742 4796 scope.go:117] "RemoveContainer" containerID="62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" Nov 25 14:52:28 crc kubenswrapper[4796]: E1125 14:52:28.411197 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 14:52:43 crc kubenswrapper[4796]: I1125 14:52:43.409289 4796 scope.go:117] "RemoveContainer" containerID="62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" Nov 25 14:52:43 crc kubenswrapper[4796]: E1125 14:52:43.411659 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 14:52:45 crc kubenswrapper[4796]: I1125 14:52:45.875144 4796 scope.go:117] "RemoveContainer" containerID="11b12a44fb12af68b288b537b801c1396328386b617bea8abdea96586dfce0b0" Nov 25 14:52:46 crc kubenswrapper[4796]: I1125 14:52:46.073377 4796 scope.go:117] "RemoveContainer" containerID="ae453e3aaa7cbba30fd5bc3de23897fd5dc332bf3b291917085c8ce4126081c4" Nov 25 14:52:55 crc kubenswrapper[4796]: I1125 14:52:55.409230 4796 scope.go:117] "RemoveContainer" containerID="62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" Nov 25 14:52:55 crc kubenswrapper[4796]: E1125 14:52:55.410186 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 14:53:10 crc kubenswrapper[4796]: I1125 14:53:10.409820 4796 scope.go:117] "RemoveContainer" containerID="62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" Nov 25 14:53:10 crc kubenswrapper[4796]: E1125 14:53:10.410562 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 
14:53:24 crc kubenswrapper[4796]: I1125 14:53:24.410422 4796 scope.go:117] "RemoveContainer" containerID="62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" Nov 25 14:53:24 crc kubenswrapper[4796]: E1125 14:53:24.411230 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 14:53:36 crc kubenswrapper[4796]: I1125 14:53:36.408930 4796 scope.go:117] "RemoveContainer" containerID="62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" Nov 25 14:53:36 crc kubenswrapper[4796]: E1125 14:53:36.409730 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 14:53:38 crc kubenswrapper[4796]: I1125 14:53:38.063849 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-wvj8r"] Nov 25 14:53:38 crc kubenswrapper[4796]: I1125 14:53:38.080096 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6761-account-create-t2jhq"] Nov 25 14:53:38 crc kubenswrapper[4796]: I1125 14:53:38.090621 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-ggn22"] Nov 25 14:53:38 crc kubenswrapper[4796]: I1125 14:53:38.098740 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6761-account-create-t2jhq"] 
Nov 25 14:53:38 crc kubenswrapper[4796]: I1125 14:53:38.106798 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-wvj8r"] Nov 25 14:53:38 crc kubenswrapper[4796]: I1125 14:53:38.114434 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-ggn22"] Nov 25 14:53:38 crc kubenswrapper[4796]: I1125 14:53:38.421330 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23eee6d0-1c05-4c07-a956-888ec367e90a" path="/var/lib/kubelet/pods/23eee6d0-1c05-4c07-a956-888ec367e90a/volumes" Nov 25 14:53:38 crc kubenswrapper[4796]: I1125 14:53:38.422131 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006" path="/var/lib/kubelet/pods/26b3fe3d-2dfe-43c3-a2cd-a98b8d54a006/volumes" Nov 25 14:53:38 crc kubenswrapper[4796]: I1125 14:53:38.422903 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d08ad353-8375-4a85-a3ed-66bc9d869e5c" path="/var/lib/kubelet/pods/d08ad353-8375-4a85-a3ed-66bc9d869e5c/volumes" Nov 25 14:53:39 crc kubenswrapper[4796]: I1125 14:53:39.034951 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-11c0-account-create-9l6dt"] Nov 25 14:53:39 crc kubenswrapper[4796]: I1125 14:53:39.045877 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-11c0-account-create-9l6dt"] Nov 25 14:53:40 crc kubenswrapper[4796]: I1125 14:53:40.429966 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dec9638d-06b3-491b-895c-a3c306acddb5" path="/var/lib/kubelet/pods/dec9638d-06b3-491b-895c-a3c306acddb5/volumes" Nov 25 14:53:43 crc kubenswrapper[4796]: I1125 14:53:43.044129 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-5bd8-account-create-m8848"] Nov 25 14:53:43 crc kubenswrapper[4796]: I1125 14:53:43.054269 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-5bd8-account-create-m8848"] 
Nov 25 14:53:44 crc kubenswrapper[4796]: I1125 14:53:44.031545 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-4lg86"] Nov 25 14:53:44 crc kubenswrapper[4796]: I1125 14:53:44.045963 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-4lg86"] Nov 25 14:53:44 crc kubenswrapper[4796]: I1125 14:53:44.426854 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14330b10-0b24-42a5-a682-cbc7cdb4a546" path="/var/lib/kubelet/pods/14330b10-0b24-42a5-a682-cbc7cdb4a546/volumes" Nov 25 14:53:44 crc kubenswrapper[4796]: I1125 14:53:44.427668 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef276b07-ccd1-4f2d-ab5f-b7208745b3e8" path="/var/lib/kubelet/pods/ef276b07-ccd1-4f2d-ab5f-b7208745b3e8/volumes" Nov 25 14:53:46 crc kubenswrapper[4796]: I1125 14:53:46.150197 4796 scope.go:117] "RemoveContainer" containerID="b9832b7c4b7dd6978820e1544fc8aad479fe066db647a92c7f5b539c1f1f7e1f" Nov 25 14:53:46 crc kubenswrapper[4796]: I1125 14:53:46.186655 4796 scope.go:117] "RemoveContainer" containerID="d0b4a45686e2d526926edae22b46ee125ee8cb7172e2d45de5986dca16ba07ce" Nov 25 14:53:46 crc kubenswrapper[4796]: I1125 14:53:46.249710 4796 scope.go:117] "RemoveContainer" containerID="3629dccbf558507c73ce3bf836e48149f30db2720b061acc8e139b98121b3323" Nov 25 14:53:46 crc kubenswrapper[4796]: I1125 14:53:46.305494 4796 scope.go:117] "RemoveContainer" containerID="10ade1f6739c4fdacf0cb2f32a5d4429a897b4a5447c2d636bad8757aa5cc408" Nov 25 14:53:46 crc kubenswrapper[4796]: I1125 14:53:46.339782 4796 scope.go:117] "RemoveContainer" containerID="ab89f5c16b7e93a7afc5b1a6221b5fbdb4328e8c12d2dd1c453fe6dabd1aeaee" Nov 25 14:53:46 crc kubenswrapper[4796]: I1125 14:53:46.402225 4796 scope.go:117] "RemoveContainer" containerID="73ff7fb769c5ef04d811215f7acd39f4a2df2642b045f452f9afe986677bda3f" Nov 25 14:53:50 crc kubenswrapper[4796]: I1125 14:53:50.409839 4796 scope.go:117] "RemoveContainer" 
containerID="62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" Nov 25 14:53:50 crc kubenswrapper[4796]: E1125 14:53:50.437238 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 14:54:02 crc kubenswrapper[4796]: I1125 14:54:02.418340 4796 scope.go:117] "RemoveContainer" containerID="62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" Nov 25 14:54:02 crc kubenswrapper[4796]: E1125 14:54:02.419277 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 14:54:04 crc kubenswrapper[4796]: I1125 14:54:04.043239 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-3c99-account-create-vmvq6"] Nov 25 14:54:04 crc kubenswrapper[4796]: I1125 14:54:04.052519 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-3c99-account-create-vmvq6"] Nov 25 14:54:04 crc kubenswrapper[4796]: I1125 14:54:04.429302 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6107e4d3-3da4-4db6-9ec5-501f1b44c37c" path="/var/lib/kubelet/pods/6107e4d3-3da4-4db6-9ec5-501f1b44c37c/volumes" Nov 25 14:54:07 crc kubenswrapper[4796]: I1125 14:54:07.037557 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-g8hc4"] Nov 
25 14:54:07 crc kubenswrapper[4796]: I1125 14:54:07.050268 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-cfce-account-create-ss8dd"] Nov 25 14:54:07 crc kubenswrapper[4796]: I1125 14:54:07.060443 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-a7c2-account-create-w6tcb"] Nov 25 14:54:07 crc kubenswrapper[4796]: I1125 14:54:07.070659 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-p2wm7"] Nov 25 14:54:07 crc kubenswrapper[4796]: I1125 14:54:07.081653 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-g8hc4"] Nov 25 14:54:07 crc kubenswrapper[4796]: I1125 14:54:07.099350 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-95mnt"] Nov 25 14:54:07 crc kubenswrapper[4796]: I1125 14:54:07.110925 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-cfce-account-create-ss8dd"] Nov 25 14:54:07 crc kubenswrapper[4796]: I1125 14:54:07.122142 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-p2wm7"] Nov 25 14:54:07 crc kubenswrapper[4796]: I1125 14:54:07.132949 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-a7c2-account-create-w6tcb"] Nov 25 14:54:07 crc kubenswrapper[4796]: I1125 14:54:07.147540 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-95mnt"] Nov 25 14:54:08 crc kubenswrapper[4796]: I1125 14:54:08.420475 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2739db56-54ae-4a4d-8941-5d27d9fbbd85" path="/var/lib/kubelet/pods/2739db56-54ae-4a4d-8941-5d27d9fbbd85/volumes" Nov 25 14:54:08 crc kubenswrapper[4796]: I1125 14:54:08.421627 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33029dfd-906d-425d-8266-d87ea1af419b" path="/var/lib/kubelet/pods/33029dfd-906d-425d-8266-d87ea1af419b/volumes" Nov 25 14:54:08 crc 
kubenswrapper[4796]: I1125 14:54:08.422277 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51d3d4ee-22cb-4ec4-ad98-acfdf570ba21" path="/var/lib/kubelet/pods/51d3d4ee-22cb-4ec4-ad98-acfdf570ba21/volumes" Nov 25 14:54:08 crc kubenswrapper[4796]: I1125 14:54:08.422992 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67e2365b-5b83-408e-8b40-59c35b6fcd90" path="/var/lib/kubelet/pods/67e2365b-5b83-408e-8b40-59c35b6fcd90/volumes" Nov 25 14:54:08 crc kubenswrapper[4796]: I1125 14:54:08.424267 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be9578a6-b0e4-4efb-ae4b-86cd92008d5e" path="/var/lib/kubelet/pods/be9578a6-b0e4-4efb-ae4b-86cd92008d5e/volumes" Nov 25 14:54:15 crc kubenswrapper[4796]: I1125 14:54:15.410802 4796 scope.go:117] "RemoveContainer" containerID="62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" Nov 25 14:54:15 crc kubenswrapper[4796]: E1125 14:54:15.411840 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 14:54:24 crc kubenswrapper[4796]: E1125 14:54:24.076463 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode06f3673_5956_425d_aefa_270976a3804d.slice/crio-710716c2397c603477d7e22d41aad23a86e1e9e1a0e721467450b7ac27f416f9.scope\": RecentStats: unable to find data in memory cache]" Nov 25 14:54:24 crc kubenswrapper[4796]: I1125 14:54:24.397546 4796 generic.go:334] "Generic (PLEG): container finished" podID="e06f3673-5956-425d-aefa-270976a3804d" 
containerID="710716c2397c603477d7e22d41aad23a86e1e9e1a0e721467450b7ac27f416f9" exitCode=0 Nov 25 14:54:24 crc kubenswrapper[4796]: I1125 14:54:24.397951 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94" event={"ID":"e06f3673-5956-425d-aefa-270976a3804d","Type":"ContainerDied","Data":"710716c2397c603477d7e22d41aad23a86e1e9e1a0e721467450b7ac27f416f9"} Nov 25 14:54:25 crc kubenswrapper[4796]: I1125 14:54:25.821395 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94" Nov 25 14:54:25 crc kubenswrapper[4796]: I1125 14:54:25.974913 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8wgk\" (UniqueName: \"kubernetes.io/projected/e06f3673-5956-425d-aefa-270976a3804d-kube-api-access-j8wgk\") pod \"e06f3673-5956-425d-aefa-270976a3804d\" (UID: \"e06f3673-5956-425d-aefa-270976a3804d\") " Nov 25 14:54:25 crc kubenswrapper[4796]: I1125 14:54:25.975114 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e06f3673-5956-425d-aefa-270976a3804d-bootstrap-combined-ca-bundle\") pod \"e06f3673-5956-425d-aefa-270976a3804d\" (UID: \"e06f3673-5956-425d-aefa-270976a3804d\") " Nov 25 14:54:25 crc kubenswrapper[4796]: I1125 14:54:25.975276 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e06f3673-5956-425d-aefa-270976a3804d-inventory\") pod \"e06f3673-5956-425d-aefa-270976a3804d\" (UID: \"e06f3673-5956-425d-aefa-270976a3804d\") " Nov 25 14:54:25 crc kubenswrapper[4796]: I1125 14:54:25.975381 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e06f3673-5956-425d-aefa-270976a3804d-ssh-key\") pod 
\"e06f3673-5956-425d-aefa-270976a3804d\" (UID: \"e06f3673-5956-425d-aefa-270976a3804d\") " Nov 25 14:54:25 crc kubenswrapper[4796]: I1125 14:54:25.980769 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e06f3673-5956-425d-aefa-270976a3804d-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e06f3673-5956-425d-aefa-270976a3804d" (UID: "e06f3673-5956-425d-aefa-270976a3804d"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4796]: I1125 14:54:25.986686 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e06f3673-5956-425d-aefa-270976a3804d-kube-api-access-j8wgk" (OuterVolumeSpecName: "kube-api-access-j8wgk") pod "e06f3673-5956-425d-aefa-270976a3804d" (UID: "e06f3673-5956-425d-aefa-270976a3804d"). InnerVolumeSpecName "kube-api-access-j8wgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.017707 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e06f3673-5956-425d-aefa-270976a3804d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e06f3673-5956-425d-aefa-270976a3804d" (UID: "e06f3673-5956-425d-aefa-270976a3804d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.017719 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e06f3673-5956-425d-aefa-270976a3804d-inventory" (OuterVolumeSpecName: "inventory") pod "e06f3673-5956-425d-aefa-270976a3804d" (UID: "e06f3673-5956-425d-aefa-270976a3804d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.078055 4796 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e06f3673-5956-425d-aefa-270976a3804d-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.078090 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e06f3673-5956-425d-aefa-270976a3804d-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.078103 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e06f3673-5956-425d-aefa-270976a3804d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.078114 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8wgk\" (UniqueName: \"kubernetes.io/projected/e06f3673-5956-425d-aefa-270976a3804d-kube-api-access-j8wgk\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.418559 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.420185 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94" event={"ID":"e06f3673-5956-425d-aefa-270976a3804d","Type":"ContainerDied","Data":"4acb5a7e7b75e88e0b1c105d5fb71c917f52f2eff7176187ff456aa658f01ee8"} Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.420243 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4acb5a7e7b75e88e0b1c105d5fb71c917f52f2eff7176187ff456aa658f01ee8" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.520836 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h678c"] Nov 25 14:54:26 crc kubenswrapper[4796]: E1125 14:54:26.521289 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd359be4-d9ea-4590-93b1-f75dc900852f" containerName="extract-utilities" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.521313 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd359be4-d9ea-4590-93b1-f75dc900852f" containerName="extract-utilities" Nov 25 14:54:26 crc kubenswrapper[4796]: E1125 14:54:26.521335 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd359be4-d9ea-4590-93b1-f75dc900852f" containerName="registry-server" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.521343 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd359be4-d9ea-4590-93b1-f75dc900852f" containerName="registry-server" Nov 25 14:54:26 crc kubenswrapper[4796]: E1125 14:54:26.521365 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ed803b-ff54-413e-9042-1fbe72426085" containerName="extract-content" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.521373 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ed803b-ff54-413e-9042-1fbe72426085" 
containerName="extract-content" Nov 25 14:54:26 crc kubenswrapper[4796]: E1125 14:54:26.521392 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ed803b-ff54-413e-9042-1fbe72426085" containerName="extract-utilities" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.521399 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ed803b-ff54-413e-9042-1fbe72426085" containerName="extract-utilities" Nov 25 14:54:26 crc kubenswrapper[4796]: E1125 14:54:26.521414 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd359be4-d9ea-4590-93b1-f75dc900852f" containerName="extract-content" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.521423 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd359be4-d9ea-4590-93b1-f75dc900852f" containerName="extract-content" Nov 25 14:54:26 crc kubenswrapper[4796]: E1125 14:54:26.521450 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ed803b-ff54-413e-9042-1fbe72426085" containerName="registry-server" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.521459 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ed803b-ff54-413e-9042-1fbe72426085" containerName="registry-server" Nov 25 14:54:26 crc kubenswrapper[4796]: E1125 14:54:26.521474 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e06f3673-5956-425d-aefa-270976a3804d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.521483 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="e06f3673-5956-425d-aefa-270976a3804d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.521732 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd359be4-d9ea-4590-93b1-f75dc900852f" containerName="registry-server" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.521755 4796 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="e06f3673-5956-425d-aefa-270976a3804d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.521779 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ed803b-ff54-413e-9042-1fbe72426085" containerName="registry-server" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.522486 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h678c" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.525282 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.525663 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n2hfx" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.531352 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.533106 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.535294 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h678c"] Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.693832 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8zhh\" (UniqueName: \"kubernetes.io/projected/9c76afe2-174a-4c31-a551-101661ae546b-kube-api-access-q8zhh\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-h678c\" (UID: \"9c76afe2-174a-4c31-a551-101661ae546b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h678c" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.694210 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c76afe2-174a-4c31-a551-101661ae546b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-h678c\" (UID: \"9c76afe2-174a-4c31-a551-101661ae546b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h678c" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.694552 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c76afe2-174a-4c31-a551-101661ae546b-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-h678c\" (UID: \"9c76afe2-174a-4c31-a551-101661ae546b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h678c" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.797400 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c76afe2-174a-4c31-a551-101661ae546b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-h678c\" (UID: \"9c76afe2-174a-4c31-a551-101661ae546b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h678c" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.797556 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c76afe2-174a-4c31-a551-101661ae546b-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-h678c\" (UID: \"9c76afe2-174a-4c31-a551-101661ae546b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h678c" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.797736 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8zhh\" (UniqueName: \"kubernetes.io/projected/9c76afe2-174a-4c31-a551-101661ae546b-kube-api-access-q8zhh\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-h678c\" (UID: \"9c76afe2-174a-4c31-a551-101661ae546b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h678c" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.808522 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c76afe2-174a-4c31-a551-101661ae546b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-h678c\" (UID: \"9c76afe2-174a-4c31-a551-101661ae546b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h678c" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.814507 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c76afe2-174a-4c31-a551-101661ae546b-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-h678c\" (UID: \"9c76afe2-174a-4c31-a551-101661ae546b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h678c" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.834034 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8zhh\" (UniqueName: \"kubernetes.io/projected/9c76afe2-174a-4c31-a551-101661ae546b-kube-api-access-q8zhh\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-h678c\" (UID: \"9c76afe2-174a-4c31-a551-101661ae546b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h678c" Nov 25 14:54:26 crc kubenswrapper[4796]: I1125 14:54:26.839089 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h678c" Nov 25 14:54:27 crc kubenswrapper[4796]: I1125 14:54:27.409217 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h678c"] Nov 25 14:54:27 crc kubenswrapper[4796]: I1125 14:54:27.421341 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 14:54:27 crc kubenswrapper[4796]: I1125 14:54:27.441330 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h678c" event={"ID":"9c76afe2-174a-4c31-a551-101661ae546b","Type":"ContainerStarted","Data":"88420f5a7214f59b333e0f4cc4cb08096896b535db1c6b5b0e34fce3fa13ef3f"} Nov 25 14:54:28 crc kubenswrapper[4796]: I1125 14:54:28.474113 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h678c" event={"ID":"9c76afe2-174a-4c31-a551-101661ae546b","Type":"ContainerStarted","Data":"277691886b5a7cf1ea3d4d2c6d4bd8496cc63b6a05d29d5af50097fc37b161b6"} Nov 25 14:54:28 crc kubenswrapper[4796]: I1125 14:54:28.502889 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h678c" podStartSLOduration=2.058029432 podStartE2EDuration="2.502863012s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="2025-11-25 14:54:27.420960074 +0000 UTC m=+1795.764069508" lastFinishedPulling="2025-11-25 14:54:27.865793654 +0000 UTC m=+1796.208903088" observedRunningTime="2025-11-25 14:54:28.496586336 +0000 UTC m=+1796.839695760" watchObservedRunningTime="2025-11-25 14:54:28.502863012 +0000 UTC m=+1796.845972446" Nov 25 14:54:30 crc kubenswrapper[4796]: I1125 14:54:30.409465 4796 scope.go:117] "RemoveContainer" containerID="62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" Nov 25 14:54:30 crc 
kubenswrapper[4796]: E1125 14:54:30.409974 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 14:54:43 crc kubenswrapper[4796]: I1125 14:54:43.409659 4796 scope.go:117] "RemoveContainer" containerID="62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" Nov 25 14:54:43 crc kubenswrapper[4796]: E1125 14:54:43.410712 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 14:54:46 crc kubenswrapper[4796]: I1125 14:54:46.632551 4796 scope.go:117] "RemoveContainer" containerID="224c0e5523c636191053b8b4316851dddedf503ddfc2a5c355675113acbb04d4" Nov 25 14:54:46 crc kubenswrapper[4796]: I1125 14:54:46.659327 4796 scope.go:117] "RemoveContainer" containerID="a415fcb8a66ddea15366e1b0a06ead28f09f623219e55d09bfb932aacb4de794" Nov 25 14:54:46 crc kubenswrapper[4796]: I1125 14:54:46.712953 4796 scope.go:117] "RemoveContainer" containerID="c03ee5fb490755c9b4cc444d5381ccf74fc326170c06df26139faf2ef97af89a" Nov 25 14:54:46 crc kubenswrapper[4796]: I1125 14:54:46.731042 4796 scope.go:117] "RemoveContainer" containerID="f8db123148bdac84a8e586915e7ceabce60f2251f24b77da9589e34572f96ebd" Nov 25 14:54:46 crc kubenswrapper[4796]: I1125 14:54:46.755221 4796 scope.go:117] "RemoveContainer" 
containerID="b6051fdc8b9ed05f0002d452706c8aa881a72e0bbfadbb66a4254c29f3d43b99" Nov 25 14:54:46 crc kubenswrapper[4796]: I1125 14:54:46.801828 4796 scope.go:117] "RemoveContainer" containerID="1c27b68358aec3bb1d3a2ef1ffd4739013093372451fcc976cab987e7ebbc1a8" Nov 25 14:54:46 crc kubenswrapper[4796]: I1125 14:54:46.843736 4796 scope.go:117] "RemoveContainer" containerID="3de62eeb0cf5474fdc201d0d0ec8d96b79d532d0536dfd8dffb23e69d9739aae" Nov 25 14:54:46 crc kubenswrapper[4796]: I1125 14:54:46.890798 4796 scope.go:117] "RemoveContainer" containerID="1e69a3294841638470b56e6ad1ccfb5caf614f4c65c282b6364ac89e54027846" Nov 25 14:54:46 crc kubenswrapper[4796]: I1125 14:54:46.935072 4796 scope.go:117] "RemoveContainer" containerID="a4aaeb2e8eadd4938a7265faf28c9f569b1f96a0ed61e46d7834f8910c05b4f4" Nov 25 14:54:55 crc kubenswrapper[4796]: I1125 14:54:55.409786 4796 scope.go:117] "RemoveContainer" containerID="62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" Nov 25 14:54:55 crc kubenswrapper[4796]: E1125 14:54:55.410394 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 14:55:08 crc kubenswrapper[4796]: I1125 14:55:08.409934 4796 scope.go:117] "RemoveContainer" containerID="62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" Nov 25 14:55:08 crc kubenswrapper[4796]: E1125 14:55:08.410904 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 14:55:12 crc kubenswrapper[4796]: I1125 14:55:12.057273 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-stpn7"] Nov 25 14:55:12 crc kubenswrapper[4796]: I1125 14:55:12.074273 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-stpn7"] Nov 25 14:55:12 crc kubenswrapper[4796]: I1125 14:55:12.427199 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02b64285-eaa9-4677-aa4c-a16f0cffc2f8" path="/var/lib/kubelet/pods/02b64285-eaa9-4677-aa4c-a16f0cffc2f8/volumes" Nov 25 14:55:23 crc kubenswrapper[4796]: I1125 14:55:23.409839 4796 scope.go:117] "RemoveContainer" containerID="62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" Nov 25 14:55:24 crc kubenswrapper[4796]: I1125 14:55:24.114486 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerStarted","Data":"70d0d78805ff6ee84a8ea3338031d4b970ef33f4c653ba7d63ee0c9fa7a78f92"} Nov 25 14:55:31 crc kubenswrapper[4796]: I1125 14:55:31.035542 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-kddwd"] Nov 25 14:55:31 crc kubenswrapper[4796]: I1125 14:55:31.054027 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-kddwd"] Nov 25 14:55:32 crc kubenswrapper[4796]: I1125 14:55:32.421656 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3947d76-dff0-44d7-9b86-d2a0406db500" path="/var/lib/kubelet/pods/d3947d76-dff0-44d7-9b86-d2a0406db500/volumes" Nov 25 14:55:47 crc kubenswrapper[4796]: I1125 14:55:47.081222 4796 scope.go:117] "RemoveContainer" 
containerID="3d3e0703fe13c0316f521fa3c9dd09f327a2ea1690e7152d4c9b7354c6c0c159" Nov 25 14:55:47 crc kubenswrapper[4796]: I1125 14:55:47.136777 4796 scope.go:117] "RemoveContainer" containerID="9273adb8a7b2702f78b3ff186c214371c06a57b8d66d3d1ae12bc29558f29507" Nov 25 14:55:53 crc kubenswrapper[4796]: I1125 14:55:53.062341 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-n86kb"] Nov 25 14:55:53 crc kubenswrapper[4796]: I1125 14:55:53.082602 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-n86kb"] Nov 25 14:55:54 crc kubenswrapper[4796]: I1125 14:55:54.429402 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7" path="/var/lib/kubelet/pods/aecc2e8c-ded8-42b7-b1a3-df3eeedc84f7/volumes" Nov 25 14:55:55 crc kubenswrapper[4796]: I1125 14:55:55.069888 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-vmgbv"] Nov 25 14:55:55 crc kubenswrapper[4796]: I1125 14:55:55.083953 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-vmgbv"] Nov 25 14:55:56 crc kubenswrapper[4796]: I1125 14:55:56.421177 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e99260e-8b90-4cd0-8417-8dc3c142a743" path="/var/lib/kubelet/pods/1e99260e-8b90-4cd0-8417-8dc3c142a743/volumes" Nov 25 14:55:57 crc kubenswrapper[4796]: I1125 14:55:57.034384 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-8fxjq"] Nov 25 14:55:57 crc kubenswrapper[4796]: I1125 14:55:57.046748 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-8fxjq"] Nov 25 14:55:58 crc kubenswrapper[4796]: I1125 14:55:58.426348 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="182a7451-724e-4649-a911-f26535ec04f9" path="/var/lib/kubelet/pods/182a7451-724e-4649-a911-f26535ec04f9/volumes" Nov 25 14:56:01 crc 
kubenswrapper[4796]: I1125 14:56:01.037525 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-ttn2n"] Nov 25 14:56:01 crc kubenswrapper[4796]: I1125 14:56:01.046089 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-ttn2n"] Nov 25 14:56:02 crc kubenswrapper[4796]: I1125 14:56:02.427385 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2" path="/var/lib/kubelet/pods/c9b39f97-e9b2-4bfd-85bb-e33ade48ddc2/volumes" Nov 25 14:56:10 crc kubenswrapper[4796]: I1125 14:56:10.604627 4796 generic.go:334] "Generic (PLEG): container finished" podID="9c76afe2-174a-4c31-a551-101661ae546b" containerID="277691886b5a7cf1ea3d4d2c6d4bd8496cc63b6a05d29d5af50097fc37b161b6" exitCode=0 Nov 25 14:56:10 crc kubenswrapper[4796]: I1125 14:56:10.604727 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h678c" event={"ID":"9c76afe2-174a-4c31-a551-101661ae546b","Type":"ContainerDied","Data":"277691886b5a7cf1ea3d4d2c6d4bd8496cc63b6a05d29d5af50097fc37b161b6"} Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.121650 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h678c" Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.156525 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8zhh\" (UniqueName: \"kubernetes.io/projected/9c76afe2-174a-4c31-a551-101661ae546b-kube-api-access-q8zhh\") pod \"9c76afe2-174a-4c31-a551-101661ae546b\" (UID: \"9c76afe2-174a-4c31-a551-101661ae546b\") " Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.156632 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c76afe2-174a-4c31-a551-101661ae546b-inventory\") pod \"9c76afe2-174a-4c31-a551-101661ae546b\" (UID: \"9c76afe2-174a-4c31-a551-101661ae546b\") " Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.156771 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c76afe2-174a-4c31-a551-101661ae546b-ssh-key\") pod \"9c76afe2-174a-4c31-a551-101661ae546b\" (UID: \"9c76afe2-174a-4c31-a551-101661ae546b\") " Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.162353 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c76afe2-174a-4c31-a551-101661ae546b-kube-api-access-q8zhh" (OuterVolumeSpecName: "kube-api-access-q8zhh") pod "9c76afe2-174a-4c31-a551-101661ae546b" (UID: "9c76afe2-174a-4c31-a551-101661ae546b"). InnerVolumeSpecName "kube-api-access-q8zhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.194753 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c76afe2-174a-4c31-a551-101661ae546b-inventory" (OuterVolumeSpecName: "inventory") pod "9c76afe2-174a-4c31-a551-101661ae546b" (UID: "9c76afe2-174a-4c31-a551-101661ae546b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.200808 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c76afe2-174a-4c31-a551-101661ae546b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9c76afe2-174a-4c31-a551-101661ae546b" (UID: "9c76afe2-174a-4c31-a551-101661ae546b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.259253 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8zhh\" (UniqueName: \"kubernetes.io/projected/9c76afe2-174a-4c31-a551-101661ae546b-kube-api-access-q8zhh\") on node \"crc\" DevicePath \"\"" Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.259292 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c76afe2-174a-4c31-a551-101661ae546b-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.259306 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c76afe2-174a-4c31-a551-101661ae546b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.630625 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h678c" event={"ID":"9c76afe2-174a-4c31-a551-101661ae546b","Type":"ContainerDied","Data":"88420f5a7214f59b333e0f4cc4cb08096896b535db1c6b5b0e34fce3fa13ef3f"} Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.630676 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-h678c" Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.630690 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88420f5a7214f59b333e0f4cc4cb08096896b535db1c6b5b0e34fce3fa13ef3f" Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.735179 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh"] Nov 25 14:56:12 crc kubenswrapper[4796]: E1125 14:56:12.735811 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c76afe2-174a-4c31-a551-101661ae546b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.735844 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c76afe2-174a-4c31-a551-101661ae546b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.736195 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c76afe2-174a-4c31-a551-101661ae546b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.737204 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh" Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.742550 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.742800 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.743280 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n2hfx" Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.743457 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.767942 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb697a58-06f8-4133-bb60-109f14009dad-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh\" (UID: \"cb697a58-06f8-4133-bb60-109f14009dad\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh" Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.768058 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vszx4\" (UniqueName: \"kubernetes.io/projected/cb697a58-06f8-4133-bb60-109f14009dad-kube-api-access-vszx4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh\" (UID: \"cb697a58-06f8-4133-bb60-109f14009dad\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh" Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.768106 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/cb697a58-06f8-4133-bb60-109f14009dad-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh\" (UID: \"cb697a58-06f8-4133-bb60-109f14009dad\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh" Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.781194 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh"] Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.869155 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb697a58-06f8-4133-bb60-109f14009dad-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh\" (UID: \"cb697a58-06f8-4133-bb60-109f14009dad\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh" Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.869524 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vszx4\" (UniqueName: \"kubernetes.io/projected/cb697a58-06f8-4133-bb60-109f14009dad-kube-api-access-vszx4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh\" (UID: \"cb697a58-06f8-4133-bb60-109f14009dad\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh" Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.869583 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cb697a58-06f8-4133-bb60-109f14009dad-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh\" (UID: \"cb697a58-06f8-4133-bb60-109f14009dad\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh" Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.874230 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/cb697a58-06f8-4133-bb60-109f14009dad-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh\" (UID: \"cb697a58-06f8-4133-bb60-109f14009dad\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh" Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.874933 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb697a58-06f8-4133-bb60-109f14009dad-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh\" (UID: \"cb697a58-06f8-4133-bb60-109f14009dad\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh" Nov 25 14:56:12 crc kubenswrapper[4796]: I1125 14:56:12.889762 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vszx4\" (UniqueName: \"kubernetes.io/projected/cb697a58-06f8-4133-bb60-109f14009dad-kube-api-access-vszx4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh\" (UID: \"cb697a58-06f8-4133-bb60-109f14009dad\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh" Nov 25 14:56:13 crc kubenswrapper[4796]: I1125 14:56:13.063332 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh" Nov 25 14:56:13 crc kubenswrapper[4796]: I1125 14:56:13.669657 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh"] Nov 25 14:56:13 crc kubenswrapper[4796]: W1125 14:56:13.676006 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb697a58_06f8_4133_bb60_109f14009dad.slice/crio-2666f0ea48396e7f2d8186b9bd0289046ec9c013af5dea02d4f5416bbcc593ae WatchSource:0}: Error finding container 2666f0ea48396e7f2d8186b9bd0289046ec9c013af5dea02d4f5416bbcc593ae: Status 404 returned error can't find the container with id 2666f0ea48396e7f2d8186b9bd0289046ec9c013af5dea02d4f5416bbcc593ae Nov 25 14:56:14 crc kubenswrapper[4796]: I1125 14:56:14.679313 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh" event={"ID":"cb697a58-06f8-4133-bb60-109f14009dad","Type":"ContainerStarted","Data":"999667ee3305b1429927be06a34eedeea6a5cd07863e27c18f81828a8673f99c"} Nov 25 14:56:14 crc kubenswrapper[4796]: I1125 14:56:14.679907 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh" event={"ID":"cb697a58-06f8-4133-bb60-109f14009dad","Type":"ContainerStarted","Data":"2666f0ea48396e7f2d8186b9bd0289046ec9c013af5dea02d4f5416bbcc593ae"} Nov 25 14:56:14 crc kubenswrapper[4796]: I1125 14:56:14.704280 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh" podStartSLOduration=2.259293786 podStartE2EDuration="2.704256391s" podCreationTimestamp="2025-11-25 14:56:12 +0000 UTC" firstStartedPulling="2025-11-25 14:56:13.679018001 +0000 UTC m=+1902.022127425" lastFinishedPulling="2025-11-25 14:56:14.123980606 +0000 
UTC m=+1902.467090030" observedRunningTime="2025-11-25 14:56:14.699035456 +0000 UTC m=+1903.042144890" watchObservedRunningTime="2025-11-25 14:56:14.704256391 +0000 UTC m=+1903.047365815" Nov 25 14:56:17 crc kubenswrapper[4796]: I1125 14:56:17.054681 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-tt8qv"] Nov 25 14:56:17 crc kubenswrapper[4796]: I1125 14:56:17.067932 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-tt8qv"] Nov 25 14:56:18 crc kubenswrapper[4796]: I1125 14:56:18.426389 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0493d28-3276-4a85-a800-4d0b1576c407" path="/var/lib/kubelet/pods/b0493d28-3276-4a85-a800-4d0b1576c407/volumes" Nov 25 14:56:37 crc kubenswrapper[4796]: I1125 14:56:37.042706 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-mx7pg"] Nov 25 14:56:37 crc kubenswrapper[4796]: I1125 14:56:37.060498 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-mx7pg"] Nov 25 14:56:38 crc kubenswrapper[4796]: I1125 14:56:38.055246 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-l2nn8"] Nov 25 14:56:38 crc kubenswrapper[4796]: I1125 14:56:38.069147 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-l2nn8"] Nov 25 14:56:38 crc kubenswrapper[4796]: I1125 14:56:38.078986 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-f0b0-account-create-h8v4z"] Nov 25 14:56:38 crc kubenswrapper[4796]: I1125 14:56:38.089560 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-f0b0-account-create-h8v4z"] Nov 25 14:56:38 crc kubenswrapper[4796]: I1125 14:56:38.428223 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="391eabca-f0e8-49c8-b98b-c495a39f9d46" path="/var/lib/kubelet/pods/391eabca-f0e8-49c8-b98b-c495a39f9d46/volumes" Nov 25 14:56:38 
crc kubenswrapper[4796]: I1125 14:56:38.430308 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d784bc2-02d6-423f-8286-74eae70a6986" path="/var/lib/kubelet/pods/7d784bc2-02d6-423f-8286-74eae70a6986/volumes" Nov 25 14:56:38 crc kubenswrapper[4796]: I1125 14:56:38.432008 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c147d18a-4e11-41c0-87fc-628ab428482b" path="/var/lib/kubelet/pods/c147d18a-4e11-41c0-87fc-628ab428482b/volumes" Nov 25 14:56:39 crc kubenswrapper[4796]: I1125 14:56:39.036011 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-4v5qd"] Nov 25 14:56:39 crc kubenswrapper[4796]: I1125 14:56:39.047479 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-47c0-account-create-92x8f"] Nov 25 14:56:39 crc kubenswrapper[4796]: I1125 14:56:39.057473 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-476a-account-create-kw4p2"] Nov 25 14:56:39 crc kubenswrapper[4796]: I1125 14:56:39.065161 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-4v5qd"] Nov 25 14:56:39 crc kubenswrapper[4796]: I1125 14:56:39.073512 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-47c0-account-create-92x8f"] Nov 25 14:56:39 crc kubenswrapper[4796]: I1125 14:56:39.081495 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-476a-account-create-kw4p2"] Nov 25 14:56:40 crc kubenswrapper[4796]: I1125 14:56:40.422121 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9" path="/var/lib/kubelet/pods/d203a2c3-7b83-4c3e-b151-d6fc27b0f4e9/volumes" Nov 25 14:56:40 crc kubenswrapper[4796]: I1125 14:56:40.423084 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d44da81f-aeed-45e1-b7bc-3f4608f077f9" path="/var/lib/kubelet/pods/d44da81f-aeed-45e1-b7bc-3f4608f077f9/volumes" Nov 
25 14:56:40 crc kubenswrapper[4796]: I1125 14:56:40.423751 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df314672-6ee5-4768-b4e4-34df7f3abfd1" path="/var/lib/kubelet/pods/df314672-6ee5-4768-b4e4-34df7f3abfd1/volumes" Nov 25 14:56:47 crc kubenswrapper[4796]: I1125 14:56:47.233962 4796 scope.go:117] "RemoveContainer" containerID="09f3104b61f642b98d2f0d8f9e593a2b57967e74c31b223316fefe3075fdb61b" Nov 25 14:56:47 crc kubenswrapper[4796]: I1125 14:56:47.274806 4796 scope.go:117] "RemoveContainer" containerID="363231601cdf33cbf27464f7007383d131448c1a1d91d2e1643a3a0e6aa09674" Nov 25 14:56:47 crc kubenswrapper[4796]: I1125 14:56:47.338604 4796 scope.go:117] "RemoveContainer" containerID="791f3fd2b3fbfada116efc5867cd0e47785f4ce89f4ee623c27749f69c6fe7e0" Nov 25 14:56:47 crc kubenswrapper[4796]: I1125 14:56:47.356973 4796 scope.go:117] "RemoveContainer" containerID="fa26e6e7264c7408342069a542d6d9878eeb2d814cd5084bb0807037339563cd" Nov 25 14:56:47 crc kubenswrapper[4796]: I1125 14:56:47.418523 4796 scope.go:117] "RemoveContainer" containerID="5bb708f073b921fc079417bc4beb5addd37b59b0c39606708ed53c498561c69f" Nov 25 14:56:47 crc kubenswrapper[4796]: I1125 14:56:47.448110 4796 scope.go:117] "RemoveContainer" containerID="463903737e97d6d64a9615a13c2ff45e3d614f97444e82a5efc8c97e7c7b0161" Nov 25 14:56:47 crc kubenswrapper[4796]: I1125 14:56:47.498678 4796 scope.go:117] "RemoveContainer" containerID="3c40d926051c3e978a39f64ee4616f5d1e54cf18a14935a1aad94403b40f713f" Nov 25 14:56:47 crc kubenswrapper[4796]: I1125 14:56:47.536967 4796 scope.go:117] "RemoveContainer" containerID="a4b36e21ccdfe4148ba49233cd5e90ca89db8ba31e9ac923ae97dcb756dbe492" Nov 25 14:56:47 crc kubenswrapper[4796]: I1125 14:56:47.571047 4796 scope.go:117] "RemoveContainer" containerID="b5f9a3f112bf73deb92d34f9303693dd31b99f2aa0bf345c73078397cc705f6e" Nov 25 14:56:47 crc kubenswrapper[4796]: I1125 14:56:47.619731 4796 scope.go:117] "RemoveContainer" 
containerID="2c63332215ffc1b35fcf28a45694b19de2296c9096f319e360944a2cfea88350" Nov 25 14:56:47 crc kubenswrapper[4796]: I1125 14:56:47.648709 4796 scope.go:117] "RemoveContainer" containerID="2c820b08564400d58237462f8ede8e7588c97b0ce47919f4edecbae522bcad32" Nov 25 14:56:47 crc kubenswrapper[4796]: I1125 14:56:47.667502 4796 scope.go:117] "RemoveContainer" containerID="b27d4c66e3d062f16abbcad6819f80948f578648bb56e4390908ae9603213037" Nov 25 14:56:47 crc kubenswrapper[4796]: I1125 14:56:47.686445 4796 scope.go:117] "RemoveContainer" containerID="f15cf1c30f4179a5137dda7c89dddd69166fdde7338cd1dc89f182ba3c5f36dc" Nov 25 14:56:47 crc kubenswrapper[4796]: I1125 14:56:47.712201 4796 scope.go:117] "RemoveContainer" containerID="94a71a1b0d3d6e415c65793e5d1bec17eadc7955b63ee7b563584f5a4d65d396" Nov 25 14:57:16 crc kubenswrapper[4796]: I1125 14:57:16.047436 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kpnjm"] Nov 25 14:57:16 crc kubenswrapper[4796]: I1125 14:57:16.061664 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kpnjm"] Nov 25 14:57:16 crc kubenswrapper[4796]: I1125 14:57:16.427900 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33e7984e-9b94-436b-90f4-82e5253ac471" path="/var/lib/kubelet/pods/33e7984e-9b94-436b-90f4-82e5253ac471/volumes" Nov 25 14:57:27 crc kubenswrapper[4796]: I1125 14:57:27.439108 4796 generic.go:334] "Generic (PLEG): container finished" podID="cb697a58-06f8-4133-bb60-109f14009dad" containerID="999667ee3305b1429927be06a34eedeea6a5cd07863e27c18f81828a8673f99c" exitCode=0 Nov 25 14:57:27 crc kubenswrapper[4796]: I1125 14:57:27.439187 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh" event={"ID":"cb697a58-06f8-4133-bb60-109f14009dad","Type":"ContainerDied","Data":"999667ee3305b1429927be06a34eedeea6a5cd07863e27c18f81828a8673f99c"} Nov 25 14:57:28 
crc kubenswrapper[4796]: I1125 14:57:28.849170 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh" Nov 25 14:57:28 crc kubenswrapper[4796]: I1125 14:57:28.968477 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cb697a58-06f8-4133-bb60-109f14009dad-ssh-key\") pod \"cb697a58-06f8-4133-bb60-109f14009dad\" (UID: \"cb697a58-06f8-4133-bb60-109f14009dad\") " Nov 25 14:57:28 crc kubenswrapper[4796]: I1125 14:57:28.968538 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vszx4\" (UniqueName: \"kubernetes.io/projected/cb697a58-06f8-4133-bb60-109f14009dad-kube-api-access-vszx4\") pod \"cb697a58-06f8-4133-bb60-109f14009dad\" (UID: \"cb697a58-06f8-4133-bb60-109f14009dad\") " Nov 25 14:57:28 crc kubenswrapper[4796]: I1125 14:57:28.968933 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb697a58-06f8-4133-bb60-109f14009dad-inventory\") pod \"cb697a58-06f8-4133-bb60-109f14009dad\" (UID: \"cb697a58-06f8-4133-bb60-109f14009dad\") " Nov 25 14:57:28 crc kubenswrapper[4796]: I1125 14:57:28.973955 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb697a58-06f8-4133-bb60-109f14009dad-kube-api-access-vszx4" (OuterVolumeSpecName: "kube-api-access-vszx4") pod "cb697a58-06f8-4133-bb60-109f14009dad" (UID: "cb697a58-06f8-4133-bb60-109f14009dad"). InnerVolumeSpecName "kube-api-access-vszx4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:57:28 crc kubenswrapper[4796]: I1125 14:57:28.996730 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb697a58-06f8-4133-bb60-109f14009dad-inventory" (OuterVolumeSpecName: "inventory") pod "cb697a58-06f8-4133-bb60-109f14009dad" (UID: "cb697a58-06f8-4133-bb60-109f14009dad"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:57:28 crc kubenswrapper[4796]: I1125 14:57:28.998154 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb697a58-06f8-4133-bb60-109f14009dad-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cb697a58-06f8-4133-bb60-109f14009dad" (UID: "cb697a58-06f8-4133-bb60-109f14009dad"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:57:29 crc kubenswrapper[4796]: I1125 14:57:29.071355 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cb697a58-06f8-4133-bb60-109f14009dad-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 14:57:29 crc kubenswrapper[4796]: I1125 14:57:29.071401 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vszx4\" (UniqueName: \"kubernetes.io/projected/cb697a58-06f8-4133-bb60-109f14009dad-kube-api-access-vszx4\") on node \"crc\" DevicePath \"\"" Nov 25 14:57:29 crc kubenswrapper[4796]: I1125 14:57:29.071443 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb697a58-06f8-4133-bb60-109f14009dad-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 14:57:29 crc kubenswrapper[4796]: I1125 14:57:29.470007 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh" 
event={"ID":"cb697a58-06f8-4133-bb60-109f14009dad","Type":"ContainerDied","Data":"2666f0ea48396e7f2d8186b9bd0289046ec9c013af5dea02d4f5416bbcc593ae"} Nov 25 14:57:29 crc kubenswrapper[4796]: I1125 14:57:29.470049 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2666f0ea48396e7f2d8186b9bd0289046ec9c013af5dea02d4f5416bbcc593ae" Nov 25 14:57:29 crc kubenswrapper[4796]: I1125 14:57:29.470102 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh" Nov 25 14:57:29 crc kubenswrapper[4796]: I1125 14:57:29.552186 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7psgw"] Nov 25 14:57:29 crc kubenswrapper[4796]: E1125 14:57:29.552625 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb697a58-06f8-4133-bb60-109f14009dad" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 14:57:29 crc kubenswrapper[4796]: I1125 14:57:29.552648 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb697a58-06f8-4133-bb60-109f14009dad" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 14:57:29 crc kubenswrapper[4796]: I1125 14:57:29.552925 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb697a58-06f8-4133-bb60-109f14009dad" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 14:57:29 crc kubenswrapper[4796]: I1125 14:57:29.553798 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7psgw" Nov 25 14:57:29 crc kubenswrapper[4796]: I1125 14:57:29.555835 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 14:57:29 crc kubenswrapper[4796]: I1125 14:57:29.560094 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 14:57:29 crc kubenswrapper[4796]: I1125 14:57:29.560497 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 14:57:29 crc kubenswrapper[4796]: I1125 14:57:29.560251 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n2hfx" Nov 25 14:57:29 crc kubenswrapper[4796]: I1125 14:57:29.567023 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7psgw"] Nov 25 14:57:29 crc kubenswrapper[4796]: I1125 14:57:29.685788 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f7f8ec51-957f-4356-888b-5bec99691717-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7psgw\" (UID: \"f7f8ec51-957f-4356-888b-5bec99691717\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7psgw" Nov 25 14:57:29 crc kubenswrapper[4796]: I1125 14:57:29.685886 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7f8ec51-957f-4356-888b-5bec99691717-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7psgw\" (UID: \"f7f8ec51-957f-4356-888b-5bec99691717\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7psgw" Nov 25 14:57:29 crc kubenswrapper[4796]: I1125 14:57:29.686038 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p59g\" (UniqueName: \"kubernetes.io/projected/f7f8ec51-957f-4356-888b-5bec99691717-kube-api-access-7p59g\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7psgw\" (UID: \"f7f8ec51-957f-4356-888b-5bec99691717\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7psgw" Nov 25 14:57:29 crc kubenswrapper[4796]: I1125 14:57:29.787455 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p59g\" (UniqueName: \"kubernetes.io/projected/f7f8ec51-957f-4356-888b-5bec99691717-kube-api-access-7p59g\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7psgw\" (UID: \"f7f8ec51-957f-4356-888b-5bec99691717\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7psgw" Nov 25 14:57:29 crc kubenswrapper[4796]: I1125 14:57:29.787740 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f7f8ec51-957f-4356-888b-5bec99691717-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7psgw\" (UID: \"f7f8ec51-957f-4356-888b-5bec99691717\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7psgw" Nov 25 14:57:29 crc kubenswrapper[4796]: I1125 14:57:29.787807 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7f8ec51-957f-4356-888b-5bec99691717-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7psgw\" (UID: \"f7f8ec51-957f-4356-888b-5bec99691717\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7psgw" Nov 25 14:57:29 crc kubenswrapper[4796]: I1125 14:57:29.796223 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7f8ec51-957f-4356-888b-5bec99691717-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-7psgw\" (UID: \"f7f8ec51-957f-4356-888b-5bec99691717\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7psgw" Nov 25 14:57:29 crc kubenswrapper[4796]: I1125 14:57:29.802107 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f7f8ec51-957f-4356-888b-5bec99691717-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7psgw\" (UID: \"f7f8ec51-957f-4356-888b-5bec99691717\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7psgw" Nov 25 14:57:29 crc kubenswrapper[4796]: I1125 14:57:29.806544 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p59g\" (UniqueName: \"kubernetes.io/projected/f7f8ec51-957f-4356-888b-5bec99691717-kube-api-access-7p59g\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7psgw\" (UID: \"f7f8ec51-957f-4356-888b-5bec99691717\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7psgw" Nov 25 14:57:29 crc kubenswrapper[4796]: I1125 14:57:29.870316 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7psgw" Nov 25 14:57:30 crc kubenswrapper[4796]: I1125 14:57:30.432664 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7psgw"] Nov 25 14:57:30 crc kubenswrapper[4796]: I1125 14:57:30.480740 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7psgw" event={"ID":"f7f8ec51-957f-4356-888b-5bec99691717","Type":"ContainerStarted","Data":"f3fc2abcce7e4d15a0cae8992bf3925b4922aad5eb696c88ed0b7ad43fea28eb"} Nov 25 14:57:32 crc kubenswrapper[4796]: I1125 14:57:32.502956 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7psgw" event={"ID":"f7f8ec51-957f-4356-888b-5bec99691717","Type":"ContainerStarted","Data":"d07a6ae22ad63c7a9fc4ce3c32acff3c2ca5f9733a15360d6f4dd4ae99c81539"} Nov 25 14:57:32 crc kubenswrapper[4796]: I1125 14:57:32.527222 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7psgw" podStartSLOduration=2.120049957 podStartE2EDuration="3.527201823s" podCreationTimestamp="2025-11-25 14:57:29 +0000 UTC" firstStartedPulling="2025-11-25 14:57:30.433175367 +0000 UTC m=+1978.776284831" lastFinishedPulling="2025-11-25 14:57:31.840327263 +0000 UTC m=+1980.183436697" observedRunningTime="2025-11-25 14:57:32.520807102 +0000 UTC m=+1980.863916536" watchObservedRunningTime="2025-11-25 14:57:32.527201823 +0000 UTC m=+1980.870311267" Nov 25 14:57:37 crc kubenswrapper[4796]: I1125 14:57:37.582660 4796 generic.go:334] "Generic (PLEG): container finished" podID="f7f8ec51-957f-4356-888b-5bec99691717" containerID="d07a6ae22ad63c7a9fc4ce3c32acff3c2ca5f9733a15360d6f4dd4ae99c81539" exitCode=0 Nov 25 14:57:37 crc kubenswrapper[4796]: I1125 14:57:37.582796 4796 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7psgw" event={"ID":"f7f8ec51-957f-4356-888b-5bec99691717","Type":"ContainerDied","Data":"d07a6ae22ad63c7a9fc4ce3c32acff3c2ca5f9733a15360d6f4dd4ae99c81539"} Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.097157 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7psgw" Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.215247 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p59g\" (UniqueName: \"kubernetes.io/projected/f7f8ec51-957f-4356-888b-5bec99691717-kube-api-access-7p59g\") pod \"f7f8ec51-957f-4356-888b-5bec99691717\" (UID: \"f7f8ec51-957f-4356-888b-5bec99691717\") " Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.215448 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f7f8ec51-957f-4356-888b-5bec99691717-ssh-key\") pod \"f7f8ec51-957f-4356-888b-5bec99691717\" (UID: \"f7f8ec51-957f-4356-888b-5bec99691717\") " Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.215494 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7f8ec51-957f-4356-888b-5bec99691717-inventory\") pod \"f7f8ec51-957f-4356-888b-5bec99691717\" (UID: \"f7f8ec51-957f-4356-888b-5bec99691717\") " Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.221616 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7f8ec51-957f-4356-888b-5bec99691717-kube-api-access-7p59g" (OuterVolumeSpecName: "kube-api-access-7p59g") pod "f7f8ec51-957f-4356-888b-5bec99691717" (UID: "f7f8ec51-957f-4356-888b-5bec99691717"). InnerVolumeSpecName "kube-api-access-7p59g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.271831 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7f8ec51-957f-4356-888b-5bec99691717-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f7f8ec51-957f-4356-888b-5bec99691717" (UID: "f7f8ec51-957f-4356-888b-5bec99691717"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.288817 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7f8ec51-957f-4356-888b-5bec99691717-inventory" (OuterVolumeSpecName: "inventory") pod "f7f8ec51-957f-4356-888b-5bec99691717" (UID: "f7f8ec51-957f-4356-888b-5bec99691717"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.317793 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p59g\" (UniqueName: \"kubernetes.io/projected/f7f8ec51-957f-4356-888b-5bec99691717-kube-api-access-7p59g\") on node \"crc\" DevicePath \"\"" Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.317833 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f7f8ec51-957f-4356-888b-5bec99691717-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.317843 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7f8ec51-957f-4356-888b-5bec99691717-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.607261 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7psgw" 
event={"ID":"f7f8ec51-957f-4356-888b-5bec99691717","Type":"ContainerDied","Data":"f3fc2abcce7e4d15a0cae8992bf3925b4922aad5eb696c88ed0b7ad43fea28eb"} Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.607339 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3fc2abcce7e4d15a0cae8992bf3925b4922aad5eb696c88ed0b7ad43fea28eb" Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.607334 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7psgw" Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.706410 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvt59"] Nov 25 14:57:39 crc kubenswrapper[4796]: E1125 14:57:39.707168 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7f8ec51-957f-4356-888b-5bec99691717" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.707197 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7f8ec51-957f-4356-888b-5bec99691717" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.707470 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7f8ec51-957f-4356-888b-5bec99691717" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.708495 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvt59" Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.711402 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.715933 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvt59"] Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.716556 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n2hfx" Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.717072 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.717322 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.827564 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db5lb\" (UniqueName: \"kubernetes.io/projected/e7c0033b-a387-447e-89cf-43e3a0f237d0-kube-api-access-db5lb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-wvt59\" (UID: \"e7c0033b-a387-447e-89cf-43e3a0f237d0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvt59" Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.827856 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e7c0033b-a387-447e-89cf-43e3a0f237d0-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-wvt59\" (UID: \"e7c0033b-a387-447e-89cf-43e3a0f237d0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvt59" Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.828020 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7c0033b-a387-447e-89cf-43e3a0f237d0-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-wvt59\" (UID: \"e7c0033b-a387-447e-89cf-43e3a0f237d0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvt59" Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.936458 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db5lb\" (UniqueName: \"kubernetes.io/projected/e7c0033b-a387-447e-89cf-43e3a0f237d0-kube-api-access-db5lb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-wvt59\" (UID: \"e7c0033b-a387-447e-89cf-43e3a0f237d0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvt59" Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.936871 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e7c0033b-a387-447e-89cf-43e3a0f237d0-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-wvt59\" (UID: \"e7c0033b-a387-447e-89cf-43e3a0f237d0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvt59" Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.936914 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7c0033b-a387-447e-89cf-43e3a0f237d0-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-wvt59\" (UID: \"e7c0033b-a387-447e-89cf-43e3a0f237d0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvt59" Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.946818 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7c0033b-a387-447e-89cf-43e3a0f237d0-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-wvt59\" (UID: 
\"e7c0033b-a387-447e-89cf-43e3a0f237d0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvt59" Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.949220 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e7c0033b-a387-447e-89cf-43e3a0f237d0-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-wvt59\" (UID: \"e7c0033b-a387-447e-89cf-43e3a0f237d0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvt59" Nov 25 14:57:39 crc kubenswrapper[4796]: I1125 14:57:39.970481 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db5lb\" (UniqueName: \"kubernetes.io/projected/e7c0033b-a387-447e-89cf-43e3a0f237d0-kube-api-access-db5lb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-wvt59\" (UID: \"e7c0033b-a387-447e-89cf-43e3a0f237d0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvt59" Nov 25 14:57:40 crc kubenswrapper[4796]: I1125 14:57:40.027354 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvt59" Nov 25 14:57:40 crc kubenswrapper[4796]: I1125 14:57:40.607499 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvt59"] Nov 25 14:57:41 crc kubenswrapper[4796]: I1125 14:57:41.629129 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvt59" event={"ID":"e7c0033b-a387-447e-89cf-43e3a0f237d0","Type":"ContainerStarted","Data":"67fde3dce8bc7fc8e19e43d5c0cecbcf63fbbef9400ceff0b8c0e76f497c70ee"} Nov 25 14:57:42 crc kubenswrapper[4796]: I1125 14:57:42.639246 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvt59" event={"ID":"e7c0033b-a387-447e-89cf-43e3a0f237d0","Type":"ContainerStarted","Data":"d81d5b61a730a5a01b0355ff36b1eaede6a3b8c0bc535e3a3f6384392bc63011"} Nov 25 14:57:42 crc kubenswrapper[4796]: I1125 14:57:42.667012 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvt59" podStartSLOduration=2.872088306 podStartE2EDuration="3.666988216s" podCreationTimestamp="2025-11-25 14:57:39 +0000 UTC" firstStartedPulling="2025-11-25 14:57:40.615260886 +0000 UTC m=+1988.958370330" lastFinishedPulling="2025-11-25 14:57:41.410160816 +0000 UTC m=+1989.753270240" observedRunningTime="2025-11-25 14:57:42.659335785 +0000 UTC m=+1991.002445259" watchObservedRunningTime="2025-11-25 14:57:42.666988216 +0000 UTC m=+1991.010097640" Nov 25 14:57:47 crc kubenswrapper[4796]: I1125 14:57:47.955436 4796 scope.go:117] "RemoveContainer" containerID="2b6c58762b86fc60c47b6ebb02dead1149ab2e31713bab09b35bd04c117eefea" Nov 25 14:57:49 crc kubenswrapper[4796]: I1125 14:57:49.513796 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 14:57:49 crc kubenswrapper[4796]: I1125 14:57:49.514243 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 14:58:19 crc kubenswrapper[4796]: I1125 14:58:19.514524 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 14:58:19 crc kubenswrapper[4796]: I1125 14:58:19.515164 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 14:58:23 crc kubenswrapper[4796]: I1125 14:58:23.044310 4796 generic.go:334] "Generic (PLEG): container finished" podID="e7c0033b-a387-447e-89cf-43e3a0f237d0" containerID="d81d5b61a730a5a01b0355ff36b1eaede6a3b8c0bc535e3a3f6384392bc63011" exitCode=0 Nov 25 14:58:23 crc kubenswrapper[4796]: I1125 14:58:23.044379 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvt59" event={"ID":"e7c0033b-a387-447e-89cf-43e3a0f237d0","Type":"ContainerDied","Data":"d81d5b61a730a5a01b0355ff36b1eaede6a3b8c0bc535e3a3f6384392bc63011"} Nov 25 14:58:23 crc kubenswrapper[4796]: I1125 14:58:23.052038 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell0-cell-mapping-r97qb"] Nov 25 14:58:23 crc kubenswrapper[4796]: I1125 14:58:23.066028 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-r97qb"] Nov 25 14:58:24 crc kubenswrapper[4796]: I1125 14:58:24.422917 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf005684-c69a-4402-8a4d-82ea423e1902" path="/var/lib/kubelet/pods/cf005684-c69a-4402-8a4d-82ea423e1902/volumes" Nov 25 14:58:24 crc kubenswrapper[4796]: I1125 14:58:24.495123 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvt59" Nov 25 14:58:24 crc kubenswrapper[4796]: I1125 14:58:24.534205 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7c0033b-a387-447e-89cf-43e3a0f237d0-inventory\") pod \"e7c0033b-a387-447e-89cf-43e3a0f237d0\" (UID: \"e7c0033b-a387-447e-89cf-43e3a0f237d0\") " Nov 25 14:58:24 crc kubenswrapper[4796]: I1125 14:58:24.534282 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e7c0033b-a387-447e-89cf-43e3a0f237d0-ssh-key\") pod \"e7c0033b-a387-447e-89cf-43e3a0f237d0\" (UID: \"e7c0033b-a387-447e-89cf-43e3a0f237d0\") " Nov 25 14:58:24 crc kubenswrapper[4796]: I1125 14:58:24.534375 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db5lb\" (UniqueName: \"kubernetes.io/projected/e7c0033b-a387-447e-89cf-43e3a0f237d0-kube-api-access-db5lb\") pod \"e7c0033b-a387-447e-89cf-43e3a0f237d0\" (UID: \"e7c0033b-a387-447e-89cf-43e3a0f237d0\") " Nov 25 14:58:24 crc kubenswrapper[4796]: I1125 14:58:24.544724 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7c0033b-a387-447e-89cf-43e3a0f237d0-kube-api-access-db5lb" (OuterVolumeSpecName: "kube-api-access-db5lb") pod 
"e7c0033b-a387-447e-89cf-43e3a0f237d0" (UID: "e7c0033b-a387-447e-89cf-43e3a0f237d0"). InnerVolumeSpecName "kube-api-access-db5lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:58:24 crc kubenswrapper[4796]: E1125 14:58:24.558992 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7c0033b-a387-447e-89cf-43e3a0f237d0-ssh-key podName:e7c0033b-a387-447e-89cf-43e3a0f237d0 nodeName:}" failed. No retries permitted until 2025-11-25 14:58:25.058958648 +0000 UTC m=+2033.402068082 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key" (UniqueName: "kubernetes.io/secret/e7c0033b-a387-447e-89cf-43e3a0f237d0-ssh-key") pod "e7c0033b-a387-447e-89cf-43e3a0f237d0" (UID: "e7c0033b-a387-447e-89cf-43e3a0f237d0") : error deleting /var/lib/kubelet/pods/e7c0033b-a387-447e-89cf-43e3a0f237d0/volume-subpaths: remove /var/lib/kubelet/pods/e7c0033b-a387-447e-89cf-43e3a0f237d0/volume-subpaths: no such file or directory Nov 25 14:58:24 crc kubenswrapper[4796]: I1125 14:58:24.561367 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7c0033b-a387-447e-89cf-43e3a0f237d0-inventory" (OuterVolumeSpecName: "inventory") pod "e7c0033b-a387-447e-89cf-43e3a0f237d0" (UID: "e7c0033b-a387-447e-89cf-43e3a0f237d0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:58:24 crc kubenswrapper[4796]: I1125 14:58:24.636316 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7c0033b-a387-447e-89cf-43e3a0f237d0-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:24 crc kubenswrapper[4796]: I1125 14:58:24.636349 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db5lb\" (UniqueName: \"kubernetes.io/projected/e7c0033b-a387-447e-89cf-43e3a0f237d0-kube-api-access-db5lb\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:25 crc kubenswrapper[4796]: I1125 14:58:25.065475 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvt59" event={"ID":"e7c0033b-a387-447e-89cf-43e3a0f237d0","Type":"ContainerDied","Data":"67fde3dce8bc7fc8e19e43d5c0cecbcf63fbbef9400ceff0b8c0e76f497c70ee"} Nov 25 14:58:25 crc kubenswrapper[4796]: I1125 14:58:25.065536 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67fde3dce8bc7fc8e19e43d5c0cecbcf63fbbef9400ceff0b8c0e76f497c70ee" Nov 25 14:58:25 crc kubenswrapper[4796]: I1125 14:58:25.065560 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-wvt59" Nov 25 14:58:25 crc kubenswrapper[4796]: I1125 14:58:25.144746 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e7c0033b-a387-447e-89cf-43e3a0f237d0-ssh-key\") pod \"e7c0033b-a387-447e-89cf-43e3a0f237d0\" (UID: \"e7c0033b-a387-447e-89cf-43e3a0f237d0\") " Nov 25 14:58:25 crc kubenswrapper[4796]: I1125 14:58:25.151422 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7c0033b-a387-447e-89cf-43e3a0f237d0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e7c0033b-a387-447e-89cf-43e3a0f237d0" (UID: "e7c0033b-a387-447e-89cf-43e3a0f237d0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:58:25 crc kubenswrapper[4796]: I1125 14:58:25.168930 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7x58g"] Nov 25 14:58:25 crc kubenswrapper[4796]: E1125 14:58:25.169433 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c0033b-a387-447e-89cf-43e3a0f237d0" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 14:58:25 crc kubenswrapper[4796]: I1125 14:58:25.169460 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c0033b-a387-447e-89cf-43e3a0f237d0" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 14:58:25 crc kubenswrapper[4796]: I1125 14:58:25.169744 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7c0033b-a387-447e-89cf-43e3a0f237d0" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 14:58:25 crc kubenswrapper[4796]: I1125 14:58:25.170551 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7x58g" Nov 25 14:58:25 crc kubenswrapper[4796]: I1125 14:58:25.176874 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7x58g"] Nov 25 14:58:25 crc kubenswrapper[4796]: I1125 14:58:25.246763 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh95k\" (UniqueName: \"kubernetes.io/projected/7ee7821f-7c42-4833-bdda-e32b06b2e1b8-kube-api-access-dh95k\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7x58g\" (UID: \"7ee7821f-7c42-4833-bdda-e32b06b2e1b8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7x58g" Nov 25 14:58:25 crc kubenswrapper[4796]: I1125 14:58:25.246882 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ee7821f-7c42-4833-bdda-e32b06b2e1b8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7x58g\" (UID: \"7ee7821f-7c42-4833-bdda-e32b06b2e1b8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7x58g" Nov 25 14:58:25 crc kubenswrapper[4796]: I1125 14:58:25.246994 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ee7821f-7c42-4833-bdda-e32b06b2e1b8-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7x58g\" (UID: \"7ee7821f-7c42-4833-bdda-e32b06b2e1b8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7x58g" Nov 25 14:58:25 crc kubenswrapper[4796]: I1125 14:58:25.247072 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e7c0033b-a387-447e-89cf-43e3a0f237d0-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:25 crc kubenswrapper[4796]: I1125 14:58:25.348390 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dh95k\" (UniqueName: \"kubernetes.io/projected/7ee7821f-7c42-4833-bdda-e32b06b2e1b8-kube-api-access-dh95k\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7x58g\" (UID: \"7ee7821f-7c42-4833-bdda-e32b06b2e1b8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7x58g" Nov 25 14:58:25 crc kubenswrapper[4796]: I1125 14:58:25.348508 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ee7821f-7c42-4833-bdda-e32b06b2e1b8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7x58g\" (UID: \"7ee7821f-7c42-4833-bdda-e32b06b2e1b8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7x58g" Nov 25 14:58:25 crc kubenswrapper[4796]: I1125 14:58:25.348547 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ee7821f-7c42-4833-bdda-e32b06b2e1b8-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7x58g\" (UID: \"7ee7821f-7c42-4833-bdda-e32b06b2e1b8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7x58g" Nov 25 14:58:25 crc kubenswrapper[4796]: I1125 14:58:25.352345 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ee7821f-7c42-4833-bdda-e32b06b2e1b8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7x58g\" (UID: \"7ee7821f-7c42-4833-bdda-e32b06b2e1b8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7x58g" Nov 25 14:58:25 crc kubenswrapper[4796]: I1125 14:58:25.352454 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ee7821f-7c42-4833-bdda-e32b06b2e1b8-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7x58g\" (UID: \"7ee7821f-7c42-4833-bdda-e32b06b2e1b8\") " 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7x58g" Nov 25 14:58:25 crc kubenswrapper[4796]: I1125 14:58:25.365858 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh95k\" (UniqueName: \"kubernetes.io/projected/7ee7821f-7c42-4833-bdda-e32b06b2e1b8-kube-api-access-dh95k\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7x58g\" (UID: \"7ee7821f-7c42-4833-bdda-e32b06b2e1b8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7x58g" Nov 25 14:58:25 crc kubenswrapper[4796]: I1125 14:58:25.541654 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7x58g" Nov 25 14:58:26 crc kubenswrapper[4796]: I1125 14:58:26.042310 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sd7sr"] Nov 25 14:58:26 crc kubenswrapper[4796]: I1125 14:58:26.052284 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sd7sr"] Nov 25 14:58:26 crc kubenswrapper[4796]: I1125 14:58:26.117492 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7x58g"] Nov 25 14:58:26 crc kubenswrapper[4796]: I1125 14:58:26.420063 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0249bda-04f8-417e-bc09-c57484f3a607" path="/var/lib/kubelet/pods/c0249bda-04f8-417e-bc09-c57484f3a607/volumes" Nov 25 14:58:27 crc kubenswrapper[4796]: I1125 14:58:27.085670 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7x58g" event={"ID":"7ee7821f-7c42-4833-bdda-e32b06b2e1b8","Type":"ContainerStarted","Data":"1c12467637359858fa5d46d88844e7c25b9130fdba5bf7fa5cb8e42d4c0afa14"} Nov 25 14:58:27 crc kubenswrapper[4796]: I1125 14:58:27.087054 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7x58g" event={"ID":"7ee7821f-7c42-4833-bdda-e32b06b2e1b8","Type":"ContainerStarted","Data":"6a7d34ccdf1392b57e7c6c1223b74521b2ec6c2e95893e974747565a09457a0e"} Nov 25 14:58:27 crc kubenswrapper[4796]: I1125 14:58:27.102762 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7x58g" podStartSLOduration=1.6973883459999999 podStartE2EDuration="2.1027406s" podCreationTimestamp="2025-11-25 14:58:25 +0000 UTC" firstStartedPulling="2025-11-25 14:58:26.126958821 +0000 UTC m=+2034.470068255" lastFinishedPulling="2025-11-25 14:58:26.532311075 +0000 UTC m=+2034.875420509" observedRunningTime="2025-11-25 14:58:27.099646842 +0000 UTC m=+2035.442756276" watchObservedRunningTime="2025-11-25 14:58:27.1027406 +0000 UTC m=+2035.445850054" Nov 25 14:58:48 crc kubenswrapper[4796]: I1125 14:58:48.060971 4796 scope.go:117] "RemoveContainer" containerID="2316935db37d883c3cf95cf2cefed00ee2aebccdd2c67757fb69b230a280b356" Nov 25 14:58:48 crc kubenswrapper[4796]: I1125 14:58:48.110848 4796 scope.go:117] "RemoveContainer" containerID="58974433ef838816cc6c7ceab6c45bb4e2c9442206cf48282249f48517f0f8ca" Nov 25 14:58:49 crc kubenswrapper[4796]: I1125 14:58:49.514147 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 14:58:49 crc kubenswrapper[4796]: I1125 14:58:49.514535 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 14:58:49 
crc kubenswrapper[4796]: I1125 14:58:49.514619 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 14:58:49 crc kubenswrapper[4796]: I1125 14:58:49.515709 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"70d0d78805ff6ee84a8ea3338031d4b970ef33f4c653ba7d63ee0c9fa7a78f92"} pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 14:58:49 crc kubenswrapper[4796]: I1125 14:58:49.515807 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" containerID="cri-o://70d0d78805ff6ee84a8ea3338031d4b970ef33f4c653ba7d63ee0c9fa7a78f92" gracePeriod=600 Nov 25 14:58:50 crc kubenswrapper[4796]: I1125 14:58:50.298336 4796 generic.go:334] "Generic (PLEG): container finished" podID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerID="70d0d78805ff6ee84a8ea3338031d4b970ef33f4c653ba7d63ee0c9fa7a78f92" exitCode=0 Nov 25 14:58:50 crc kubenswrapper[4796]: I1125 14:58:50.298379 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerDied","Data":"70d0d78805ff6ee84a8ea3338031d4b970ef33f4c653ba7d63ee0c9fa7a78f92"} Nov 25 14:58:50 crc kubenswrapper[4796]: I1125 14:58:50.298418 4796 scope.go:117] "RemoveContainer" containerID="62557f2302dc4c7d82aaf54efb06e3b1b825a25803859c584bdff916fe8be1e7" Nov 25 14:58:51 crc kubenswrapper[4796]: I1125 14:58:51.307010 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" 
event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerStarted","Data":"b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0"} Nov 25 14:59:06 crc kubenswrapper[4796]: I1125 14:59:06.047297 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-j7tcp"] Nov 25 14:59:06 crc kubenswrapper[4796]: I1125 14:59:06.059494 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-j7tcp"] Nov 25 14:59:06 crc kubenswrapper[4796]: I1125 14:59:06.423868 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cd58711-9b15-46ab-b8bf-98c0a3916fd3" path="/var/lib/kubelet/pods/1cd58711-9b15-46ab-b8bf-98c0a3916fd3/volumes" Nov 25 14:59:18 crc kubenswrapper[4796]: I1125 14:59:18.534561 4796 generic.go:334] "Generic (PLEG): container finished" podID="7ee7821f-7c42-4833-bdda-e32b06b2e1b8" containerID="1c12467637359858fa5d46d88844e7c25b9130fdba5bf7fa5cb8e42d4c0afa14" exitCode=0 Nov 25 14:59:18 crc kubenswrapper[4796]: I1125 14:59:18.534838 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7x58g" event={"ID":"7ee7821f-7c42-4833-bdda-e32b06b2e1b8","Type":"ContainerDied","Data":"1c12467637359858fa5d46d88844e7c25b9130fdba5bf7fa5cb8e42d4c0afa14"} Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.066681 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7x58g" Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.195237 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ee7821f-7c42-4833-bdda-e32b06b2e1b8-ssh-key\") pod \"7ee7821f-7c42-4833-bdda-e32b06b2e1b8\" (UID: \"7ee7821f-7c42-4833-bdda-e32b06b2e1b8\") " Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.195560 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ee7821f-7c42-4833-bdda-e32b06b2e1b8-inventory\") pod \"7ee7821f-7c42-4833-bdda-e32b06b2e1b8\" (UID: \"7ee7821f-7c42-4833-bdda-e32b06b2e1b8\") " Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.195658 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh95k\" (UniqueName: \"kubernetes.io/projected/7ee7821f-7c42-4833-bdda-e32b06b2e1b8-kube-api-access-dh95k\") pod \"7ee7821f-7c42-4833-bdda-e32b06b2e1b8\" (UID: \"7ee7821f-7c42-4833-bdda-e32b06b2e1b8\") " Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.205198 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ee7821f-7c42-4833-bdda-e32b06b2e1b8-kube-api-access-dh95k" (OuterVolumeSpecName: "kube-api-access-dh95k") pod "7ee7821f-7c42-4833-bdda-e32b06b2e1b8" (UID: "7ee7821f-7c42-4833-bdda-e32b06b2e1b8"). InnerVolumeSpecName "kube-api-access-dh95k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.223314 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ee7821f-7c42-4833-bdda-e32b06b2e1b8-inventory" (OuterVolumeSpecName: "inventory") pod "7ee7821f-7c42-4833-bdda-e32b06b2e1b8" (UID: "7ee7821f-7c42-4833-bdda-e32b06b2e1b8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.230712 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ee7821f-7c42-4833-bdda-e32b06b2e1b8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7ee7821f-7c42-4833-bdda-e32b06b2e1b8" (UID: "7ee7821f-7c42-4833-bdda-e32b06b2e1b8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.298356 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ee7821f-7c42-4833-bdda-e32b06b2e1b8-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.298407 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ee7821f-7c42-4833-bdda-e32b06b2e1b8-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.298420 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh95k\" (UniqueName: \"kubernetes.io/projected/7ee7821f-7c42-4833-bdda-e32b06b2e1b8-kube-api-access-dh95k\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.565840 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7x58g" event={"ID":"7ee7821f-7c42-4833-bdda-e32b06b2e1b8","Type":"ContainerDied","Data":"6a7d34ccdf1392b57e7c6c1223b74521b2ec6c2e95893e974747565a09457a0e"} Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.565884 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a7d34ccdf1392b57e7c6c1223b74521b2ec6c2e95893e974747565a09457a0e" Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.565900 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7x58g" Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.629255 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-8zlff"] Nov 25 14:59:20 crc kubenswrapper[4796]: E1125 14:59:20.629774 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee7821f-7c42-4833-bdda-e32b06b2e1b8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.629798 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee7821f-7c42-4833-bdda-e32b06b2e1b8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.630391 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ee7821f-7c42-4833-bdda-e32b06b2e1b8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.631449 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8zlff" Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.634444 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.634640 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.635495 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n2hfx" Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.639513 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.643891 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-8zlff"] Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.706105 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60a504c5-7f00-43a4-a364-c3be0b31a42d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-8zlff\" (UID: \"60a504c5-7f00-43a4-a364-c3be0b31a42d\") " pod="openstack/ssh-known-hosts-edpm-deployment-8zlff" Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.706245 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w7t6\" (UniqueName: \"kubernetes.io/projected/60a504c5-7f00-43a4-a364-c3be0b31a42d-kube-api-access-9w7t6\") pod \"ssh-known-hosts-edpm-deployment-8zlff\" (UID: \"60a504c5-7f00-43a4-a364-c3be0b31a42d\") " pod="openstack/ssh-known-hosts-edpm-deployment-8zlff" Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.706298 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/60a504c5-7f00-43a4-a364-c3be0b31a42d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-8zlff\" (UID: \"60a504c5-7f00-43a4-a364-c3be0b31a42d\") " pod="openstack/ssh-known-hosts-edpm-deployment-8zlff" Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.808982 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60a504c5-7f00-43a4-a364-c3be0b31a42d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-8zlff\" (UID: \"60a504c5-7f00-43a4-a364-c3be0b31a42d\") " pod="openstack/ssh-known-hosts-edpm-deployment-8zlff" Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.809176 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w7t6\" (UniqueName: \"kubernetes.io/projected/60a504c5-7f00-43a4-a364-c3be0b31a42d-kube-api-access-9w7t6\") pod \"ssh-known-hosts-edpm-deployment-8zlff\" (UID: \"60a504c5-7f00-43a4-a364-c3be0b31a42d\") " pod="openstack/ssh-known-hosts-edpm-deployment-8zlff" Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.809320 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/60a504c5-7f00-43a4-a364-c3be0b31a42d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-8zlff\" (UID: \"60a504c5-7f00-43a4-a364-c3be0b31a42d\") " pod="openstack/ssh-known-hosts-edpm-deployment-8zlff" Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.815322 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60a504c5-7f00-43a4-a364-c3be0b31a42d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-8zlff\" (UID: \"60a504c5-7f00-43a4-a364-c3be0b31a42d\") " pod="openstack/ssh-known-hosts-edpm-deployment-8zlff" Nov 
25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.816046 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/60a504c5-7f00-43a4-a364-c3be0b31a42d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-8zlff\" (UID: \"60a504c5-7f00-43a4-a364-c3be0b31a42d\") " pod="openstack/ssh-known-hosts-edpm-deployment-8zlff" Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.836512 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w7t6\" (UniqueName: \"kubernetes.io/projected/60a504c5-7f00-43a4-a364-c3be0b31a42d-kube-api-access-9w7t6\") pod \"ssh-known-hosts-edpm-deployment-8zlff\" (UID: \"60a504c5-7f00-43a4-a364-c3be0b31a42d\") " pod="openstack/ssh-known-hosts-edpm-deployment-8zlff" Nov 25 14:59:20 crc kubenswrapper[4796]: I1125 14:59:20.946969 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8zlff" Nov 25 14:59:21 crc kubenswrapper[4796]: I1125 14:59:21.523724 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-8zlff"] Nov 25 14:59:21 crc kubenswrapper[4796]: I1125 14:59:21.574998 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8zlff" event={"ID":"60a504c5-7f00-43a4-a364-c3be0b31a42d","Type":"ContainerStarted","Data":"bc480c2d48fc64a016dec16077f593889c8b209dc5ad3b154617cb73cc6aaa3b"} Nov 25 14:59:22 crc kubenswrapper[4796]: I1125 14:59:22.583936 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8zlff" event={"ID":"60a504c5-7f00-43a4-a364-c3be0b31a42d","Type":"ContainerStarted","Data":"b84b0b5ad6c245c32b9feba0a4b71fcd5ede78d45cd1a6f6b5a1f16293221980"} Nov 25 14:59:22 crc kubenswrapper[4796]: I1125 14:59:22.603445 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ssh-known-hosts-edpm-deployment-8zlff" podStartSLOduration=1.961959545 podStartE2EDuration="2.603426001s" podCreationTimestamp="2025-11-25 14:59:20 +0000 UTC" firstStartedPulling="2025-11-25 14:59:21.525814028 +0000 UTC m=+2089.868923482" lastFinishedPulling="2025-11-25 14:59:22.167280484 +0000 UTC m=+2090.510389938" observedRunningTime="2025-11-25 14:59:22.601616304 +0000 UTC m=+2090.944725728" watchObservedRunningTime="2025-11-25 14:59:22.603426001 +0000 UTC m=+2090.946535425" Nov 25 14:59:22 crc kubenswrapper[4796]: I1125 14:59:22.909883 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ndhm9"] Nov 25 14:59:22 crc kubenswrapper[4796]: I1125 14:59:22.914409 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ndhm9" Nov 25 14:59:22 crc kubenswrapper[4796]: I1125 14:59:22.967202 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ndhm9"] Nov 25 14:59:23 crc kubenswrapper[4796]: I1125 14:59:23.056436 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70a1f5c9-312b-433c-b0ef-2390ae708943-catalog-content\") pod \"redhat-operators-ndhm9\" (UID: \"70a1f5c9-312b-433c-b0ef-2390ae708943\") " pod="openshift-marketplace/redhat-operators-ndhm9" Nov 25 14:59:23 crc kubenswrapper[4796]: I1125 14:59:23.056658 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70a1f5c9-312b-433c-b0ef-2390ae708943-utilities\") pod \"redhat-operators-ndhm9\" (UID: \"70a1f5c9-312b-433c-b0ef-2390ae708943\") " pod="openshift-marketplace/redhat-operators-ndhm9" Nov 25 14:59:23 crc kubenswrapper[4796]: I1125 14:59:23.056874 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gnlx9\" (UniqueName: \"kubernetes.io/projected/70a1f5c9-312b-433c-b0ef-2390ae708943-kube-api-access-gnlx9\") pod \"redhat-operators-ndhm9\" (UID: \"70a1f5c9-312b-433c-b0ef-2390ae708943\") " pod="openshift-marketplace/redhat-operators-ndhm9" Nov 25 14:59:23 crc kubenswrapper[4796]: I1125 14:59:23.162003 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70a1f5c9-312b-433c-b0ef-2390ae708943-catalog-content\") pod \"redhat-operators-ndhm9\" (UID: \"70a1f5c9-312b-433c-b0ef-2390ae708943\") " pod="openshift-marketplace/redhat-operators-ndhm9" Nov 25 14:59:23 crc kubenswrapper[4796]: I1125 14:59:23.162100 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70a1f5c9-312b-433c-b0ef-2390ae708943-utilities\") pod \"redhat-operators-ndhm9\" (UID: \"70a1f5c9-312b-433c-b0ef-2390ae708943\") " pod="openshift-marketplace/redhat-operators-ndhm9" Nov 25 14:59:23 crc kubenswrapper[4796]: I1125 14:59:23.162173 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnlx9\" (UniqueName: \"kubernetes.io/projected/70a1f5c9-312b-433c-b0ef-2390ae708943-kube-api-access-gnlx9\") pod \"redhat-operators-ndhm9\" (UID: \"70a1f5c9-312b-433c-b0ef-2390ae708943\") " pod="openshift-marketplace/redhat-operators-ndhm9" Nov 25 14:59:23 crc kubenswrapper[4796]: I1125 14:59:23.163090 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70a1f5c9-312b-433c-b0ef-2390ae708943-catalog-content\") pod \"redhat-operators-ndhm9\" (UID: \"70a1f5c9-312b-433c-b0ef-2390ae708943\") " pod="openshift-marketplace/redhat-operators-ndhm9" Nov 25 14:59:23 crc kubenswrapper[4796]: I1125 14:59:23.163350 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/70a1f5c9-312b-433c-b0ef-2390ae708943-utilities\") pod \"redhat-operators-ndhm9\" (UID: \"70a1f5c9-312b-433c-b0ef-2390ae708943\") " pod="openshift-marketplace/redhat-operators-ndhm9" Nov 25 14:59:23 crc kubenswrapper[4796]: I1125 14:59:23.190900 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnlx9\" (UniqueName: \"kubernetes.io/projected/70a1f5c9-312b-433c-b0ef-2390ae708943-kube-api-access-gnlx9\") pod \"redhat-operators-ndhm9\" (UID: \"70a1f5c9-312b-433c-b0ef-2390ae708943\") " pod="openshift-marketplace/redhat-operators-ndhm9" Nov 25 14:59:23 crc kubenswrapper[4796]: I1125 14:59:23.233636 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ndhm9" Nov 25 14:59:23 crc kubenswrapper[4796]: I1125 14:59:23.746139 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ndhm9"] Nov 25 14:59:24 crc kubenswrapper[4796]: I1125 14:59:24.600229 4796 generic.go:334] "Generic (PLEG): container finished" podID="70a1f5c9-312b-433c-b0ef-2390ae708943" containerID="88a39356e192b692f5eb33d2702401f89a873f58ce5cc23304cc26d65bdd6f01" exitCode=0 Nov 25 14:59:24 crc kubenswrapper[4796]: I1125 14:59:24.600284 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndhm9" event={"ID":"70a1f5c9-312b-433c-b0ef-2390ae708943","Type":"ContainerDied","Data":"88a39356e192b692f5eb33d2702401f89a873f58ce5cc23304cc26d65bdd6f01"} Nov 25 14:59:24 crc kubenswrapper[4796]: I1125 14:59:24.600517 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndhm9" event={"ID":"70a1f5c9-312b-433c-b0ef-2390ae708943","Type":"ContainerStarted","Data":"b417b23d29b6c686d2c2363acb3036a0b648f8dc25af5ba4dfdd8660c201f8b0"} Nov 25 14:59:26 crc kubenswrapper[4796]: I1125 14:59:26.621161 4796 generic.go:334] "Generic (PLEG): container finished" 
podID="70a1f5c9-312b-433c-b0ef-2390ae708943" containerID="68895cd0d34f6f7eb59e1b34230b0d25436abd8ff6ffe94c3e79556be69f3eb1" exitCode=0 Nov 25 14:59:26 crc kubenswrapper[4796]: I1125 14:59:26.621566 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndhm9" event={"ID":"70a1f5c9-312b-433c-b0ef-2390ae708943","Type":"ContainerDied","Data":"68895cd0d34f6f7eb59e1b34230b0d25436abd8ff6ffe94c3e79556be69f3eb1"} Nov 25 14:59:27 crc kubenswrapper[4796]: I1125 14:59:27.631354 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndhm9" event={"ID":"70a1f5c9-312b-433c-b0ef-2390ae708943","Type":"ContainerStarted","Data":"fdfd12abcaef9177ccae44188278c505310841ef18260d3c5bdbd5f9ef88eeed"} Nov 25 14:59:28 crc kubenswrapper[4796]: I1125 14:59:28.656215 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ndhm9" podStartSLOduration=4.00377208 podStartE2EDuration="6.656195811s" podCreationTimestamp="2025-11-25 14:59:22 +0000 UTC" firstStartedPulling="2025-11-25 14:59:24.60280632 +0000 UTC m=+2092.945915744" lastFinishedPulling="2025-11-25 14:59:27.255230051 +0000 UTC m=+2095.598339475" observedRunningTime="2025-11-25 14:59:28.653871638 +0000 UTC m=+2096.996981082" watchObservedRunningTime="2025-11-25 14:59:28.656195811 +0000 UTC m=+2096.999305235" Nov 25 14:59:30 crc kubenswrapper[4796]: I1125 14:59:30.659206 4796 generic.go:334] "Generic (PLEG): container finished" podID="60a504c5-7f00-43a4-a364-c3be0b31a42d" containerID="b84b0b5ad6c245c32b9feba0a4b71fcd5ede78d45cd1a6f6b5a1f16293221980" exitCode=0 Nov 25 14:59:30 crc kubenswrapper[4796]: I1125 14:59:30.659278 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8zlff" event={"ID":"60a504c5-7f00-43a4-a364-c3be0b31a42d","Type":"ContainerDied","Data":"b84b0b5ad6c245c32b9feba0a4b71fcd5ede78d45cd1a6f6b5a1f16293221980"} Nov 25 14:59:32 
crc kubenswrapper[4796]: I1125 14:59:32.106231 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8zlff" Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.250458 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w7t6\" (UniqueName: \"kubernetes.io/projected/60a504c5-7f00-43a4-a364-c3be0b31a42d-kube-api-access-9w7t6\") pod \"60a504c5-7f00-43a4-a364-c3be0b31a42d\" (UID: \"60a504c5-7f00-43a4-a364-c3be0b31a42d\") " Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.250595 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60a504c5-7f00-43a4-a364-c3be0b31a42d-ssh-key-openstack-edpm-ipam\") pod \"60a504c5-7f00-43a4-a364-c3be0b31a42d\" (UID: \"60a504c5-7f00-43a4-a364-c3be0b31a42d\") " Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.250678 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/60a504c5-7f00-43a4-a364-c3be0b31a42d-inventory-0\") pod \"60a504c5-7f00-43a4-a364-c3be0b31a42d\" (UID: \"60a504c5-7f00-43a4-a364-c3be0b31a42d\") " Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.273069 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60a504c5-7f00-43a4-a364-c3be0b31a42d-kube-api-access-9w7t6" (OuterVolumeSpecName: "kube-api-access-9w7t6") pod "60a504c5-7f00-43a4-a364-c3be0b31a42d" (UID: "60a504c5-7f00-43a4-a364-c3be0b31a42d"). InnerVolumeSpecName "kube-api-access-9w7t6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.286416 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60a504c5-7f00-43a4-a364-c3be0b31a42d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "60a504c5-7f00-43a4-a364-c3be0b31a42d" (UID: "60a504c5-7f00-43a4-a364-c3be0b31a42d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.298642 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60a504c5-7f00-43a4-a364-c3be0b31a42d-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "60a504c5-7f00-43a4-a364-c3be0b31a42d" (UID: "60a504c5-7f00-43a4-a364-c3be0b31a42d"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.353872 4796 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/60a504c5-7f00-43a4-a364-c3be0b31a42d-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.353917 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w7t6\" (UniqueName: \"kubernetes.io/projected/60a504c5-7f00-43a4-a364-c3be0b31a42d-kube-api-access-9w7t6\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.353934 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60a504c5-7f00-43a4-a364-c3be0b31a42d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.677013 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8zlff" 
event={"ID":"60a504c5-7f00-43a4-a364-c3be0b31a42d","Type":"ContainerDied","Data":"bc480c2d48fc64a016dec16077f593889c8b209dc5ad3b154617cb73cc6aaa3b"} Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.677060 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc480c2d48fc64a016dec16077f593889c8b209dc5ad3b154617cb73cc6aaa3b" Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.677122 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8zlff" Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.743500 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8pk87"] Nov 25 14:59:32 crc kubenswrapper[4796]: E1125 14:59:32.744292 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60a504c5-7f00-43a4-a364-c3be0b31a42d" containerName="ssh-known-hosts-edpm-deployment" Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.744416 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="60a504c5-7f00-43a4-a364-c3be0b31a42d" containerName="ssh-known-hosts-edpm-deployment" Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.744760 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="60a504c5-7f00-43a4-a364-c3be0b31a42d" containerName="ssh-known-hosts-edpm-deployment" Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.745698 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8pk87" Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.748117 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n2hfx" Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.748260 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.748646 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.748915 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.770206 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8pk87"] Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.863042 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39a7e6ad-f344-409f-b5a0-664a602fdf66-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8pk87\" (UID: \"39a7e6ad-f344-409f-b5a0-664a602fdf66\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8pk87" Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.863403 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/39a7e6ad-f344-409f-b5a0-664a602fdf66-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8pk87\" (UID: \"39a7e6ad-f344-409f-b5a0-664a602fdf66\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8pk87" Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.863514 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n9hm\" (UniqueName: \"kubernetes.io/projected/39a7e6ad-f344-409f-b5a0-664a602fdf66-kube-api-access-6n9hm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8pk87\" (UID: \"39a7e6ad-f344-409f-b5a0-664a602fdf66\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8pk87" Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.966752 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39a7e6ad-f344-409f-b5a0-664a602fdf66-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8pk87\" (UID: \"39a7e6ad-f344-409f-b5a0-664a602fdf66\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8pk87" Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.967008 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/39a7e6ad-f344-409f-b5a0-664a602fdf66-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8pk87\" (UID: \"39a7e6ad-f344-409f-b5a0-664a602fdf66\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8pk87" Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.967280 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n9hm\" (UniqueName: \"kubernetes.io/projected/39a7e6ad-f344-409f-b5a0-664a602fdf66-kube-api-access-6n9hm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8pk87\" (UID: \"39a7e6ad-f344-409f-b5a0-664a602fdf66\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8pk87" Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.973497 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39a7e6ad-f344-409f-b5a0-664a602fdf66-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8pk87\" (UID: \"39a7e6ad-f344-409f-b5a0-664a602fdf66\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8pk87" Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.973849 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/39a7e6ad-f344-409f-b5a0-664a602fdf66-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8pk87\" (UID: \"39a7e6ad-f344-409f-b5a0-664a602fdf66\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8pk87" Nov 25 14:59:32 crc kubenswrapper[4796]: I1125 14:59:32.990147 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n9hm\" (UniqueName: \"kubernetes.io/projected/39a7e6ad-f344-409f-b5a0-664a602fdf66-kube-api-access-6n9hm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8pk87\" (UID: \"39a7e6ad-f344-409f-b5a0-664a602fdf66\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8pk87" Nov 25 14:59:33 crc kubenswrapper[4796]: I1125 14:59:33.065945 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8pk87" Nov 25 14:59:33 crc kubenswrapper[4796]: I1125 14:59:33.235120 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ndhm9" Nov 25 14:59:33 crc kubenswrapper[4796]: I1125 14:59:33.235171 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ndhm9" Nov 25 14:59:33 crc kubenswrapper[4796]: I1125 14:59:33.296798 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ndhm9" Nov 25 14:59:33 crc kubenswrapper[4796]: I1125 14:59:33.653941 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8pk87"] Nov 25 14:59:33 crc kubenswrapper[4796]: I1125 14:59:33.665244 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 14:59:33 crc kubenswrapper[4796]: I1125 14:59:33.690004 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8pk87" event={"ID":"39a7e6ad-f344-409f-b5a0-664a602fdf66","Type":"ContainerStarted","Data":"40f5e1a5791bb6266f0c20e017cd34d3787cc9e457ce67149131637e38928fbb"} Nov 25 14:59:33 crc kubenswrapper[4796]: I1125 14:59:33.749534 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ndhm9" Nov 25 14:59:33 crc kubenswrapper[4796]: I1125 14:59:33.797175 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ndhm9"] Nov 25 14:59:34 crc kubenswrapper[4796]: I1125 14:59:34.699616 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8pk87" 
event={"ID":"39a7e6ad-f344-409f-b5a0-664a602fdf66","Type":"ContainerStarted","Data":"30f709884c9c35baf0c6e8f104087a7fd832e80439dcb92e560cf3b5edeaf334"} Nov 25 14:59:34 crc kubenswrapper[4796]: I1125 14:59:34.722521 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8pk87" podStartSLOduration=1.9660309919999999 podStartE2EDuration="2.722499019s" podCreationTimestamp="2025-11-25 14:59:32 +0000 UTC" firstStartedPulling="2025-11-25 14:59:33.664899407 +0000 UTC m=+2102.008008871" lastFinishedPulling="2025-11-25 14:59:34.421367464 +0000 UTC m=+2102.764476898" observedRunningTime="2025-11-25 14:59:34.714512987 +0000 UTC m=+2103.057622431" watchObservedRunningTime="2025-11-25 14:59:34.722499019 +0000 UTC m=+2103.065608443" Nov 25 14:59:35 crc kubenswrapper[4796]: I1125 14:59:35.710245 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ndhm9" podUID="70a1f5c9-312b-433c-b0ef-2390ae708943" containerName="registry-server" containerID="cri-o://fdfd12abcaef9177ccae44188278c505310841ef18260d3c5bdbd5f9ef88eeed" gracePeriod=2 Nov 25 14:59:36 crc kubenswrapper[4796]: I1125 14:59:36.187714 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ndhm9" Nov 25 14:59:36 crc kubenswrapper[4796]: I1125 14:59:36.346741 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70a1f5c9-312b-433c-b0ef-2390ae708943-catalog-content\") pod \"70a1f5c9-312b-433c-b0ef-2390ae708943\" (UID: \"70a1f5c9-312b-433c-b0ef-2390ae708943\") " Nov 25 14:59:36 crc kubenswrapper[4796]: I1125 14:59:36.346991 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70a1f5c9-312b-433c-b0ef-2390ae708943-utilities\") pod \"70a1f5c9-312b-433c-b0ef-2390ae708943\" (UID: \"70a1f5c9-312b-433c-b0ef-2390ae708943\") " Nov 25 14:59:36 crc kubenswrapper[4796]: I1125 14:59:36.347053 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnlx9\" (UniqueName: \"kubernetes.io/projected/70a1f5c9-312b-433c-b0ef-2390ae708943-kube-api-access-gnlx9\") pod \"70a1f5c9-312b-433c-b0ef-2390ae708943\" (UID: \"70a1f5c9-312b-433c-b0ef-2390ae708943\") " Nov 25 14:59:36 crc kubenswrapper[4796]: I1125 14:59:36.347922 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70a1f5c9-312b-433c-b0ef-2390ae708943-utilities" (OuterVolumeSpecName: "utilities") pod "70a1f5c9-312b-433c-b0ef-2390ae708943" (UID: "70a1f5c9-312b-433c-b0ef-2390ae708943"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:59:36 crc kubenswrapper[4796]: I1125 14:59:36.353439 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70a1f5c9-312b-433c-b0ef-2390ae708943-kube-api-access-gnlx9" (OuterVolumeSpecName: "kube-api-access-gnlx9") pod "70a1f5c9-312b-433c-b0ef-2390ae708943" (UID: "70a1f5c9-312b-433c-b0ef-2390ae708943"). InnerVolumeSpecName "kube-api-access-gnlx9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:59:36 crc kubenswrapper[4796]: I1125 14:59:36.449680 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70a1f5c9-312b-433c-b0ef-2390ae708943-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:36 crc kubenswrapper[4796]: I1125 14:59:36.449723 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnlx9\" (UniqueName: \"kubernetes.io/projected/70a1f5c9-312b-433c-b0ef-2390ae708943-kube-api-access-gnlx9\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:36 crc kubenswrapper[4796]: I1125 14:59:36.462128 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70a1f5c9-312b-433c-b0ef-2390ae708943-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70a1f5c9-312b-433c-b0ef-2390ae708943" (UID: "70a1f5c9-312b-433c-b0ef-2390ae708943"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:59:36 crc kubenswrapper[4796]: I1125 14:59:36.551893 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70a1f5c9-312b-433c-b0ef-2390ae708943-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:36 crc kubenswrapper[4796]: I1125 14:59:36.719758 4796 generic.go:334] "Generic (PLEG): container finished" podID="70a1f5c9-312b-433c-b0ef-2390ae708943" containerID="fdfd12abcaef9177ccae44188278c505310841ef18260d3c5bdbd5f9ef88eeed" exitCode=0 Nov 25 14:59:36 crc kubenswrapper[4796]: I1125 14:59:36.719813 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndhm9" event={"ID":"70a1f5c9-312b-433c-b0ef-2390ae708943","Type":"ContainerDied","Data":"fdfd12abcaef9177ccae44188278c505310841ef18260d3c5bdbd5f9ef88eeed"} Nov 25 14:59:36 crc kubenswrapper[4796]: I1125 14:59:36.719871 4796 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ndhm9" Nov 25 14:59:36 crc kubenswrapper[4796]: I1125 14:59:36.719895 4796 scope.go:117] "RemoveContainer" containerID="fdfd12abcaef9177ccae44188278c505310841ef18260d3c5bdbd5f9ef88eeed" Nov 25 14:59:36 crc kubenswrapper[4796]: I1125 14:59:36.719875 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndhm9" event={"ID":"70a1f5c9-312b-433c-b0ef-2390ae708943","Type":"ContainerDied","Data":"b417b23d29b6c686d2c2363acb3036a0b648f8dc25af5ba4dfdd8660c201f8b0"} Nov 25 14:59:36 crc kubenswrapper[4796]: I1125 14:59:36.757076 4796 scope.go:117] "RemoveContainer" containerID="68895cd0d34f6f7eb59e1b34230b0d25436abd8ff6ffe94c3e79556be69f3eb1" Nov 25 14:59:36 crc kubenswrapper[4796]: I1125 14:59:36.760048 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ndhm9"] Nov 25 14:59:36 crc kubenswrapper[4796]: I1125 14:59:36.767635 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ndhm9"] Nov 25 14:59:36 crc kubenswrapper[4796]: I1125 14:59:36.814996 4796 scope.go:117] "RemoveContainer" containerID="88a39356e192b692f5eb33d2702401f89a873f58ce5cc23304cc26d65bdd6f01" Nov 25 14:59:36 crc kubenswrapper[4796]: I1125 14:59:36.839287 4796 scope.go:117] "RemoveContainer" containerID="fdfd12abcaef9177ccae44188278c505310841ef18260d3c5bdbd5f9ef88eeed" Nov 25 14:59:36 crc kubenswrapper[4796]: E1125 14:59:36.839931 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdfd12abcaef9177ccae44188278c505310841ef18260d3c5bdbd5f9ef88eeed\": container with ID starting with fdfd12abcaef9177ccae44188278c505310841ef18260d3c5bdbd5f9ef88eeed not found: ID does not exist" containerID="fdfd12abcaef9177ccae44188278c505310841ef18260d3c5bdbd5f9ef88eeed" Nov 25 14:59:36 crc kubenswrapper[4796]: I1125 14:59:36.839961 4796 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdfd12abcaef9177ccae44188278c505310841ef18260d3c5bdbd5f9ef88eeed"} err="failed to get container status \"fdfd12abcaef9177ccae44188278c505310841ef18260d3c5bdbd5f9ef88eeed\": rpc error: code = NotFound desc = could not find container \"fdfd12abcaef9177ccae44188278c505310841ef18260d3c5bdbd5f9ef88eeed\": container with ID starting with fdfd12abcaef9177ccae44188278c505310841ef18260d3c5bdbd5f9ef88eeed not found: ID does not exist" Nov 25 14:59:36 crc kubenswrapper[4796]: I1125 14:59:36.839980 4796 scope.go:117] "RemoveContainer" containerID="68895cd0d34f6f7eb59e1b34230b0d25436abd8ff6ffe94c3e79556be69f3eb1" Nov 25 14:59:36 crc kubenswrapper[4796]: E1125 14:59:36.840317 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68895cd0d34f6f7eb59e1b34230b0d25436abd8ff6ffe94c3e79556be69f3eb1\": container with ID starting with 68895cd0d34f6f7eb59e1b34230b0d25436abd8ff6ffe94c3e79556be69f3eb1 not found: ID does not exist" containerID="68895cd0d34f6f7eb59e1b34230b0d25436abd8ff6ffe94c3e79556be69f3eb1" Nov 25 14:59:36 crc kubenswrapper[4796]: I1125 14:59:36.840344 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68895cd0d34f6f7eb59e1b34230b0d25436abd8ff6ffe94c3e79556be69f3eb1"} err="failed to get container status \"68895cd0d34f6f7eb59e1b34230b0d25436abd8ff6ffe94c3e79556be69f3eb1\": rpc error: code = NotFound desc = could not find container \"68895cd0d34f6f7eb59e1b34230b0d25436abd8ff6ffe94c3e79556be69f3eb1\": container with ID starting with 68895cd0d34f6f7eb59e1b34230b0d25436abd8ff6ffe94c3e79556be69f3eb1 not found: ID does not exist" Nov 25 14:59:36 crc kubenswrapper[4796]: I1125 14:59:36.840359 4796 scope.go:117] "RemoveContainer" containerID="88a39356e192b692f5eb33d2702401f89a873f58ce5cc23304cc26d65bdd6f01" Nov 25 14:59:36 crc kubenswrapper[4796]: E1125 
14:59:36.840645 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88a39356e192b692f5eb33d2702401f89a873f58ce5cc23304cc26d65bdd6f01\": container with ID starting with 88a39356e192b692f5eb33d2702401f89a873f58ce5cc23304cc26d65bdd6f01 not found: ID does not exist" containerID="88a39356e192b692f5eb33d2702401f89a873f58ce5cc23304cc26d65bdd6f01" Nov 25 14:59:36 crc kubenswrapper[4796]: I1125 14:59:36.840673 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88a39356e192b692f5eb33d2702401f89a873f58ce5cc23304cc26d65bdd6f01"} err="failed to get container status \"88a39356e192b692f5eb33d2702401f89a873f58ce5cc23304cc26d65bdd6f01\": rpc error: code = NotFound desc = could not find container \"88a39356e192b692f5eb33d2702401f89a873f58ce5cc23304cc26d65bdd6f01\": container with ID starting with 88a39356e192b692f5eb33d2702401f89a873f58ce5cc23304cc26d65bdd6f01 not found: ID does not exist" Nov 25 14:59:38 crc kubenswrapper[4796]: I1125 14:59:38.421183 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70a1f5c9-312b-433c-b0ef-2390ae708943" path="/var/lib/kubelet/pods/70a1f5c9-312b-433c-b0ef-2390ae708943/volumes" Nov 25 14:59:42 crc kubenswrapper[4796]: I1125 14:59:42.784567 4796 generic.go:334] "Generic (PLEG): container finished" podID="39a7e6ad-f344-409f-b5a0-664a602fdf66" containerID="30f709884c9c35baf0c6e8f104087a7fd832e80439dcb92e560cf3b5edeaf334" exitCode=0 Nov 25 14:59:42 crc kubenswrapper[4796]: I1125 14:59:42.784617 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8pk87" event={"ID":"39a7e6ad-f344-409f-b5a0-664a602fdf66","Type":"ContainerDied","Data":"30f709884c9c35baf0c6e8f104087a7fd832e80439dcb92e560cf3b5edeaf334"} Nov 25 14:59:44 crc kubenswrapper[4796]: I1125 14:59:44.228911 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8pk87" Nov 25 14:59:44 crc kubenswrapper[4796]: I1125 14:59:44.296128 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/39a7e6ad-f344-409f-b5a0-664a602fdf66-ssh-key\") pod \"39a7e6ad-f344-409f-b5a0-664a602fdf66\" (UID: \"39a7e6ad-f344-409f-b5a0-664a602fdf66\") " Nov 25 14:59:44 crc kubenswrapper[4796]: I1125 14:59:44.296284 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39a7e6ad-f344-409f-b5a0-664a602fdf66-inventory\") pod \"39a7e6ad-f344-409f-b5a0-664a602fdf66\" (UID: \"39a7e6ad-f344-409f-b5a0-664a602fdf66\") " Nov 25 14:59:44 crc kubenswrapper[4796]: I1125 14:59:44.296470 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n9hm\" (UniqueName: \"kubernetes.io/projected/39a7e6ad-f344-409f-b5a0-664a602fdf66-kube-api-access-6n9hm\") pod \"39a7e6ad-f344-409f-b5a0-664a602fdf66\" (UID: \"39a7e6ad-f344-409f-b5a0-664a602fdf66\") " Nov 25 14:59:44 crc kubenswrapper[4796]: I1125 14:59:44.307813 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39a7e6ad-f344-409f-b5a0-664a602fdf66-kube-api-access-6n9hm" (OuterVolumeSpecName: "kube-api-access-6n9hm") pod "39a7e6ad-f344-409f-b5a0-664a602fdf66" (UID: "39a7e6ad-f344-409f-b5a0-664a602fdf66"). InnerVolumeSpecName "kube-api-access-6n9hm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:59:44 crc kubenswrapper[4796]: I1125 14:59:44.328461 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39a7e6ad-f344-409f-b5a0-664a602fdf66-inventory" (OuterVolumeSpecName: "inventory") pod "39a7e6ad-f344-409f-b5a0-664a602fdf66" (UID: "39a7e6ad-f344-409f-b5a0-664a602fdf66"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:59:44 crc kubenswrapper[4796]: I1125 14:59:44.328959 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39a7e6ad-f344-409f-b5a0-664a602fdf66-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "39a7e6ad-f344-409f-b5a0-664a602fdf66" (UID: "39a7e6ad-f344-409f-b5a0-664a602fdf66"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:59:44 crc kubenswrapper[4796]: I1125 14:59:44.399125 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6n9hm\" (UniqueName: \"kubernetes.io/projected/39a7e6ad-f344-409f-b5a0-664a602fdf66-kube-api-access-6n9hm\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:44 crc kubenswrapper[4796]: I1125 14:59:44.399416 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/39a7e6ad-f344-409f-b5a0-664a602fdf66-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:44 crc kubenswrapper[4796]: I1125 14:59:44.399431 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39a7e6ad-f344-409f-b5a0-664a602fdf66-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:44 crc kubenswrapper[4796]: I1125 14:59:44.811616 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8pk87" event={"ID":"39a7e6ad-f344-409f-b5a0-664a602fdf66","Type":"ContainerDied","Data":"40f5e1a5791bb6266f0c20e017cd34d3787cc9e457ce67149131637e38928fbb"} Nov 25 14:59:44 crc kubenswrapper[4796]: I1125 14:59:44.811653 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40f5e1a5791bb6266f0c20e017cd34d3787cc9e457ce67149131637e38928fbb" Nov 25 14:59:44 crc kubenswrapper[4796]: I1125 14:59:44.811713 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8pk87" Nov 25 14:59:44 crc kubenswrapper[4796]: I1125 14:59:44.874933 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q"] Nov 25 14:59:44 crc kubenswrapper[4796]: E1125 14:59:44.875301 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70a1f5c9-312b-433c-b0ef-2390ae708943" containerName="extract-utilities" Nov 25 14:59:44 crc kubenswrapper[4796]: I1125 14:59:44.875320 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="70a1f5c9-312b-433c-b0ef-2390ae708943" containerName="extract-utilities" Nov 25 14:59:44 crc kubenswrapper[4796]: E1125 14:59:44.875337 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70a1f5c9-312b-433c-b0ef-2390ae708943" containerName="registry-server" Nov 25 14:59:44 crc kubenswrapper[4796]: I1125 14:59:44.875343 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="70a1f5c9-312b-433c-b0ef-2390ae708943" containerName="registry-server" Nov 25 14:59:44 crc kubenswrapper[4796]: E1125 14:59:44.875357 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39a7e6ad-f344-409f-b5a0-664a602fdf66" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 14:59:44 crc kubenswrapper[4796]: I1125 14:59:44.875363 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="39a7e6ad-f344-409f-b5a0-664a602fdf66" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 14:59:44 crc kubenswrapper[4796]: E1125 14:59:44.875385 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70a1f5c9-312b-433c-b0ef-2390ae708943" containerName="extract-content" Nov 25 14:59:44 crc kubenswrapper[4796]: I1125 14:59:44.875391 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="70a1f5c9-312b-433c-b0ef-2390ae708943" containerName="extract-content" Nov 25 14:59:44 crc kubenswrapper[4796]: I1125 14:59:44.875555 4796 
memory_manager.go:354] "RemoveStaleState removing state" podUID="70a1f5c9-312b-433c-b0ef-2390ae708943" containerName="registry-server" Nov 25 14:59:44 crc kubenswrapper[4796]: I1125 14:59:44.875567 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="39a7e6ad-f344-409f-b5a0-664a602fdf66" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 14:59:44 crc kubenswrapper[4796]: I1125 14:59:44.876279 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q" Nov 25 14:59:44 crc kubenswrapper[4796]: I1125 14:59:44.878926 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 14:59:44 crc kubenswrapper[4796]: I1125 14:59:44.879033 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 14:59:44 crc kubenswrapper[4796]: I1125 14:59:44.879601 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n2hfx" Nov 25 14:59:44 crc kubenswrapper[4796]: I1125 14:59:44.879725 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 14:59:44 crc kubenswrapper[4796]: I1125 14:59:44.894007 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q"] Nov 25 14:59:45 crc kubenswrapper[4796]: I1125 14:59:45.010911 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b8bdd873-343d-4d77-849e-14786c8db01d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q\" (UID: \"b8bdd873-343d-4d77-849e-14786c8db01d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q" Nov 25 14:59:45 crc kubenswrapper[4796]: I1125 14:59:45.011058 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8bdd873-343d-4d77-849e-14786c8db01d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q\" (UID: \"b8bdd873-343d-4d77-849e-14786c8db01d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q" Nov 25 14:59:45 crc kubenswrapper[4796]: I1125 14:59:45.011256 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fw4v\" (UniqueName: \"kubernetes.io/projected/b8bdd873-343d-4d77-849e-14786c8db01d-kube-api-access-8fw4v\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q\" (UID: \"b8bdd873-343d-4d77-849e-14786c8db01d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q" Nov 25 14:59:45 crc kubenswrapper[4796]: I1125 14:59:45.113352 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fw4v\" (UniqueName: \"kubernetes.io/projected/b8bdd873-343d-4d77-849e-14786c8db01d-kube-api-access-8fw4v\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q\" (UID: \"b8bdd873-343d-4d77-849e-14786c8db01d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q" Nov 25 14:59:45 crc kubenswrapper[4796]: I1125 14:59:45.113423 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b8bdd873-343d-4d77-849e-14786c8db01d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q\" (UID: \"b8bdd873-343d-4d77-849e-14786c8db01d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q" Nov 25 14:59:45 crc kubenswrapper[4796]: I1125 14:59:45.113501 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8bdd873-343d-4d77-849e-14786c8db01d-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q\" (UID: \"b8bdd873-343d-4d77-849e-14786c8db01d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q" Nov 25 14:59:45 crc kubenswrapper[4796]: I1125 14:59:45.119685 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8bdd873-343d-4d77-849e-14786c8db01d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q\" (UID: \"b8bdd873-343d-4d77-849e-14786c8db01d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q" Nov 25 14:59:45 crc kubenswrapper[4796]: I1125 14:59:45.123467 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b8bdd873-343d-4d77-849e-14786c8db01d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q\" (UID: \"b8bdd873-343d-4d77-849e-14786c8db01d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q" Nov 25 14:59:45 crc kubenswrapper[4796]: I1125 14:59:45.128755 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fw4v\" (UniqueName: \"kubernetes.io/projected/b8bdd873-343d-4d77-849e-14786c8db01d-kube-api-access-8fw4v\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q\" (UID: \"b8bdd873-343d-4d77-849e-14786c8db01d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q" Nov 25 14:59:45 crc kubenswrapper[4796]: I1125 14:59:45.195795 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q" Nov 25 14:59:45 crc kubenswrapper[4796]: I1125 14:59:45.743312 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q"] Nov 25 14:59:45 crc kubenswrapper[4796]: I1125 14:59:45.820266 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q" event={"ID":"b8bdd873-343d-4d77-849e-14786c8db01d","Type":"ContainerStarted","Data":"846c9c2f3a105709d45628917bff41353969ac06a7df359d9133898ddc41d4f1"} Nov 25 14:59:46 crc kubenswrapper[4796]: I1125 14:59:46.833719 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q" event={"ID":"b8bdd873-343d-4d77-849e-14786c8db01d","Type":"ContainerStarted","Data":"96cc42412a40f1d3226fc70e8b9c3cfbed21338bb9f9f2be3bf85e6ff16d8400"} Nov 25 14:59:46 crc kubenswrapper[4796]: I1125 14:59:46.863611 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q" podStartSLOduration=2.432655541 podStartE2EDuration="2.863547842s" podCreationTimestamp="2025-11-25 14:59:44 +0000 UTC" firstStartedPulling="2025-11-25 14:59:45.747633048 +0000 UTC m=+2114.090742482" lastFinishedPulling="2025-11-25 14:59:46.178525349 +0000 UTC m=+2114.521634783" observedRunningTime="2025-11-25 14:59:46.852232065 +0000 UTC m=+2115.195341499" watchObservedRunningTime="2025-11-25 14:59:46.863547842 +0000 UTC m=+2115.206657306" Nov 25 14:59:48 crc kubenswrapper[4796]: I1125 14:59:48.208198 4796 scope.go:117] "RemoveContainer" containerID="cc9e11f5c8e35863fa82011842073494fe45dbca8f3f4ed06e875bbc51a230a2" Nov 25 14:59:56 crc kubenswrapper[4796]: I1125 14:59:56.911881 4796 generic.go:334] "Generic (PLEG): container finished" podID="b8bdd873-343d-4d77-849e-14786c8db01d" 
containerID="96cc42412a40f1d3226fc70e8b9c3cfbed21338bb9f9f2be3bf85e6ff16d8400" exitCode=0 Nov 25 14:59:56 crc kubenswrapper[4796]: I1125 14:59:56.911929 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q" event={"ID":"b8bdd873-343d-4d77-849e-14786c8db01d","Type":"ContainerDied","Data":"96cc42412a40f1d3226fc70e8b9c3cfbed21338bb9f9f2be3bf85e6ff16d8400"} Nov 25 14:59:58 crc kubenswrapper[4796]: I1125 14:59:58.358712 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q" Nov 25 14:59:58 crc kubenswrapper[4796]: I1125 14:59:58.488990 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8bdd873-343d-4d77-849e-14786c8db01d-inventory\") pod \"b8bdd873-343d-4d77-849e-14786c8db01d\" (UID: \"b8bdd873-343d-4d77-849e-14786c8db01d\") " Nov 25 14:59:58 crc kubenswrapper[4796]: I1125 14:59:58.490040 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fw4v\" (UniqueName: \"kubernetes.io/projected/b8bdd873-343d-4d77-849e-14786c8db01d-kube-api-access-8fw4v\") pod \"b8bdd873-343d-4d77-849e-14786c8db01d\" (UID: \"b8bdd873-343d-4d77-849e-14786c8db01d\") " Nov 25 14:59:58 crc kubenswrapper[4796]: I1125 14:59:58.490123 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b8bdd873-343d-4d77-849e-14786c8db01d-ssh-key\") pod \"b8bdd873-343d-4d77-849e-14786c8db01d\" (UID: \"b8bdd873-343d-4d77-849e-14786c8db01d\") " Nov 25 14:59:58 crc kubenswrapper[4796]: I1125 14:59:58.494988 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8bdd873-343d-4d77-849e-14786c8db01d-kube-api-access-8fw4v" (OuterVolumeSpecName: "kube-api-access-8fw4v") pod "b8bdd873-343d-4d77-849e-14786c8db01d" 
(UID: "b8bdd873-343d-4d77-849e-14786c8db01d"). InnerVolumeSpecName "kube-api-access-8fw4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:59:58 crc kubenswrapper[4796]: I1125 14:59:58.519351 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8bdd873-343d-4d77-849e-14786c8db01d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b8bdd873-343d-4d77-849e-14786c8db01d" (UID: "b8bdd873-343d-4d77-849e-14786c8db01d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:59:58 crc kubenswrapper[4796]: I1125 14:59:58.519880 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8bdd873-343d-4d77-849e-14786c8db01d-inventory" (OuterVolumeSpecName: "inventory") pod "b8bdd873-343d-4d77-849e-14786c8db01d" (UID: "b8bdd873-343d-4d77-849e-14786c8db01d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:59:58 crc kubenswrapper[4796]: I1125 14:59:58.592635 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8bdd873-343d-4d77-849e-14786c8db01d-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:58 crc kubenswrapper[4796]: I1125 14:59:58.592686 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fw4v\" (UniqueName: \"kubernetes.io/projected/b8bdd873-343d-4d77-849e-14786c8db01d-kube-api-access-8fw4v\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:58 crc kubenswrapper[4796]: I1125 14:59:58.592697 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b8bdd873-343d-4d77-849e-14786c8db01d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:58 crc kubenswrapper[4796]: I1125 14:59:58.942770 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q" 
event={"ID":"b8bdd873-343d-4d77-849e-14786c8db01d","Type":"ContainerDied","Data":"846c9c2f3a105709d45628917bff41353969ac06a7df359d9133898ddc41d4f1"} Nov 25 14:59:58 crc kubenswrapper[4796]: I1125 14:59:58.942819 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="846c9c2f3a105709d45628917bff41353969ac06a7df359d9133898ddc41d4f1" Nov 25 14:59:58 crc kubenswrapper[4796]: I1125 14:59:58.942827 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.025047 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8"] Nov 25 14:59:59 crc kubenswrapper[4796]: E1125 14:59:59.026106 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bdd873-343d-4d77-849e-14786c8db01d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.026132 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8bdd873-343d-4d77-849e-14786c8db01d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.026339 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8bdd873-343d-4d77-849e-14786c8db01d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.028760 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.032081 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.032118 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.032176 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.032488 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.032662 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.032822 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.032977 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.033199 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n2hfx" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.035927 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8"] Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.101659 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.101847 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.101924 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.101956 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.102028 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.102092 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.102322 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.102402 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d25k5\" (UniqueName: \"kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-kube-api-access-d25k5\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.102514 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.102628 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.102685 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.102733 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.102782 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.102818 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.203693 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.203743 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.203764 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-bootstrap-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.203803 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.203846 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.203867 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.203900 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.203932 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.203953 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.204012 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.204057 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d25k5\" (UniqueName: \"kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-kube-api-access-d25k5\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.204104 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.204139 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.204159 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.208087 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc 
kubenswrapper[4796]: I1125 14:59:59.209682 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.209730 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.209800 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.209903 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.210548 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" 
(UniqueName: \"kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.210640 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.210668 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.212258 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.212304 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.212256 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.224150 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.229257 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.238057 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d25k5\" (UniqueName: \"kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-kube-api-access-d25k5\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8\" (UID: 
\"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.351895 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.857587 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8"] Nov 25 14:59:59 crc kubenswrapper[4796]: I1125 14:59:59.950556 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" event={"ID":"552fef9f-5b94-4e45-9765-5b5e6ee62bfa","Type":"ContainerStarted","Data":"924bf935b5dbedb93303780ec87bce4cb0a3fc6932e6bbe9f6437d6159f88f53"} Nov 25 15:00:00 crc kubenswrapper[4796]: I1125 15:00:00.149752 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401380-4hjpm"] Nov 25 15:00:00 crc kubenswrapper[4796]: I1125 15:00:00.151286 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-4hjpm" Nov 25 15:00:00 crc kubenswrapper[4796]: I1125 15:00:00.154826 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 15:00:00 crc kubenswrapper[4796]: I1125 15:00:00.158699 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 15:00:00 crc kubenswrapper[4796]: I1125 15:00:00.162401 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401380-4hjpm"] Nov 25 15:00:00 crc kubenswrapper[4796]: I1125 15:00:00.226958 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pxzh\" (UniqueName: \"kubernetes.io/projected/7fa2092c-558b-4bb7-b7e3-51a0551e4755-kube-api-access-5pxzh\") pod \"collect-profiles-29401380-4hjpm\" (UID: \"7fa2092c-558b-4bb7-b7e3-51a0551e4755\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-4hjpm" Nov 25 15:00:00 crc kubenswrapper[4796]: I1125 15:00:00.227252 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fa2092c-558b-4bb7-b7e3-51a0551e4755-config-volume\") pod \"collect-profiles-29401380-4hjpm\" (UID: \"7fa2092c-558b-4bb7-b7e3-51a0551e4755\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-4hjpm" Nov 25 15:00:00 crc kubenswrapper[4796]: I1125 15:00:00.227450 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7fa2092c-558b-4bb7-b7e3-51a0551e4755-secret-volume\") pod \"collect-profiles-29401380-4hjpm\" (UID: \"7fa2092c-558b-4bb7-b7e3-51a0551e4755\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-4hjpm" Nov 25 15:00:00 crc kubenswrapper[4796]: I1125 15:00:00.328113 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fa2092c-558b-4bb7-b7e3-51a0551e4755-config-volume\") pod \"collect-profiles-29401380-4hjpm\" (UID: \"7fa2092c-558b-4bb7-b7e3-51a0551e4755\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-4hjpm" Nov 25 15:00:00 crc kubenswrapper[4796]: I1125 15:00:00.328291 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7fa2092c-558b-4bb7-b7e3-51a0551e4755-secret-volume\") pod \"collect-profiles-29401380-4hjpm\" (UID: \"7fa2092c-558b-4bb7-b7e3-51a0551e4755\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-4hjpm" Nov 25 15:00:00 crc kubenswrapper[4796]: I1125 15:00:00.328364 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pxzh\" (UniqueName: \"kubernetes.io/projected/7fa2092c-558b-4bb7-b7e3-51a0551e4755-kube-api-access-5pxzh\") pod \"collect-profiles-29401380-4hjpm\" (UID: \"7fa2092c-558b-4bb7-b7e3-51a0551e4755\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-4hjpm" Nov 25 15:00:00 crc kubenswrapper[4796]: I1125 15:00:00.329323 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fa2092c-558b-4bb7-b7e3-51a0551e4755-config-volume\") pod \"collect-profiles-29401380-4hjpm\" (UID: \"7fa2092c-558b-4bb7-b7e3-51a0551e4755\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-4hjpm" Nov 25 15:00:00 crc kubenswrapper[4796]: I1125 15:00:00.338589 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7fa2092c-558b-4bb7-b7e3-51a0551e4755-secret-volume\") pod \"collect-profiles-29401380-4hjpm\" (UID: \"7fa2092c-558b-4bb7-b7e3-51a0551e4755\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-4hjpm" Nov 25 15:00:00 crc kubenswrapper[4796]: I1125 15:00:00.350959 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pxzh\" (UniqueName: \"kubernetes.io/projected/7fa2092c-558b-4bb7-b7e3-51a0551e4755-kube-api-access-5pxzh\") pod \"collect-profiles-29401380-4hjpm\" (UID: \"7fa2092c-558b-4bb7-b7e3-51a0551e4755\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-4hjpm" Nov 25 15:00:00 crc kubenswrapper[4796]: I1125 15:00:00.536222 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-4hjpm" Nov 25 15:00:00 crc kubenswrapper[4796]: I1125 15:00:00.961511 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" event={"ID":"552fef9f-5b94-4e45-9765-5b5e6ee62bfa","Type":"ContainerStarted","Data":"b95b9df332a1a4eb71f826a23116526ad9a54b5f32b16ddca2c17348d4e06b13"} Nov 25 15:00:00 crc kubenswrapper[4796]: I1125 15:00:00.992818 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" podStartSLOduration=1.4703720599999999 podStartE2EDuration="1.992793309s" podCreationTimestamp="2025-11-25 14:59:59 +0000 UTC" firstStartedPulling="2025-11-25 14:59:59.862232934 +0000 UTC m=+2128.205342358" lastFinishedPulling="2025-11-25 15:00:00.384654183 +0000 UTC m=+2128.727763607" observedRunningTime="2025-11-25 15:00:00.98240137 +0000 UTC m=+2129.325510804" watchObservedRunningTime="2025-11-25 15:00:00.992793309 +0000 UTC m=+2129.335902733" Nov 25 15:00:01 crc kubenswrapper[4796]: I1125 15:00:01.013079 4796 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401380-4hjpm"] Nov 25 15:00:01 crc kubenswrapper[4796]: W1125 15:00:01.018688 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fa2092c_558b_4bb7_b7e3_51a0551e4755.slice/crio-1f780260da972fb7aeb1c2444bc5fd3221fd7bc3180292c8ce602d1d8409164e WatchSource:0}: Error finding container 1f780260da972fb7aeb1c2444bc5fd3221fd7bc3180292c8ce602d1d8409164e: Status 404 returned error can't find the container with id 1f780260da972fb7aeb1c2444bc5fd3221fd7bc3180292c8ce602d1d8409164e Nov 25 15:00:01 crc kubenswrapper[4796]: I1125 15:00:01.984947 4796 generic.go:334] "Generic (PLEG): container finished" podID="7fa2092c-558b-4bb7-b7e3-51a0551e4755" containerID="72fd7077cad7e0a696169acdaaa2a4337e9695ad84d1e0233ab6a56c9554879b" exitCode=0 Nov 25 15:00:01 crc kubenswrapper[4796]: I1125 15:00:01.985666 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-4hjpm" event={"ID":"7fa2092c-558b-4bb7-b7e3-51a0551e4755","Type":"ContainerDied","Data":"72fd7077cad7e0a696169acdaaa2a4337e9695ad84d1e0233ab6a56c9554879b"} Nov 25 15:00:01 crc kubenswrapper[4796]: I1125 15:00:01.985720 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-4hjpm" event={"ID":"7fa2092c-558b-4bb7-b7e3-51a0551e4755","Type":"ContainerStarted","Data":"1f780260da972fb7aeb1c2444bc5fd3221fd7bc3180292c8ce602d1d8409164e"} Nov 25 15:00:03 crc kubenswrapper[4796]: I1125 15:00:03.345107 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-4hjpm" Nov 25 15:00:03 crc kubenswrapper[4796]: I1125 15:00:03.401344 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7fa2092c-558b-4bb7-b7e3-51a0551e4755-secret-volume\") pod \"7fa2092c-558b-4bb7-b7e3-51a0551e4755\" (UID: \"7fa2092c-558b-4bb7-b7e3-51a0551e4755\") " Nov 25 15:00:03 crc kubenswrapper[4796]: I1125 15:00:03.401523 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fa2092c-558b-4bb7-b7e3-51a0551e4755-config-volume\") pod \"7fa2092c-558b-4bb7-b7e3-51a0551e4755\" (UID: \"7fa2092c-558b-4bb7-b7e3-51a0551e4755\") " Nov 25 15:00:03 crc kubenswrapper[4796]: I1125 15:00:03.401986 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pxzh\" (UniqueName: \"kubernetes.io/projected/7fa2092c-558b-4bb7-b7e3-51a0551e4755-kube-api-access-5pxzh\") pod \"7fa2092c-558b-4bb7-b7e3-51a0551e4755\" (UID: \"7fa2092c-558b-4bb7-b7e3-51a0551e4755\") " Nov 25 15:00:03 crc kubenswrapper[4796]: I1125 15:00:03.402413 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fa2092c-558b-4bb7-b7e3-51a0551e4755-config-volume" (OuterVolumeSpecName: "config-volume") pod "7fa2092c-558b-4bb7-b7e3-51a0551e4755" (UID: "7fa2092c-558b-4bb7-b7e3-51a0551e4755"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:00:03 crc kubenswrapper[4796]: I1125 15:00:03.402798 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fa2092c-558b-4bb7-b7e3-51a0551e4755-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:03 crc kubenswrapper[4796]: I1125 15:00:03.407728 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fa2092c-558b-4bb7-b7e3-51a0551e4755-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7fa2092c-558b-4bb7-b7e3-51a0551e4755" (UID: "7fa2092c-558b-4bb7-b7e3-51a0551e4755"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:00:03 crc kubenswrapper[4796]: I1125 15:00:03.408144 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fa2092c-558b-4bb7-b7e3-51a0551e4755-kube-api-access-5pxzh" (OuterVolumeSpecName: "kube-api-access-5pxzh") pod "7fa2092c-558b-4bb7-b7e3-51a0551e4755" (UID: "7fa2092c-558b-4bb7-b7e3-51a0551e4755"). InnerVolumeSpecName "kube-api-access-5pxzh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:00:03 crc kubenswrapper[4796]: I1125 15:00:03.504996 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pxzh\" (UniqueName: \"kubernetes.io/projected/7fa2092c-558b-4bb7-b7e3-51a0551e4755-kube-api-access-5pxzh\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:03 crc kubenswrapper[4796]: I1125 15:00:03.505236 4796 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7fa2092c-558b-4bb7-b7e3-51a0551e4755-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:04 crc kubenswrapper[4796]: I1125 15:00:04.001373 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-4hjpm" event={"ID":"7fa2092c-558b-4bb7-b7e3-51a0551e4755","Type":"ContainerDied","Data":"1f780260da972fb7aeb1c2444bc5fd3221fd7bc3180292c8ce602d1d8409164e"} Nov 25 15:00:04 crc kubenswrapper[4796]: I1125 15:00:04.001417 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f780260da972fb7aeb1c2444bc5fd3221fd7bc3180292c8ce602d1d8409164e" Nov 25 15:00:04 crc kubenswrapper[4796]: I1125 15:00:04.001397 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-4hjpm" Nov 25 15:00:04 crc kubenswrapper[4796]: I1125 15:00:04.425707 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401335-kz9sv"] Nov 25 15:00:04 crc kubenswrapper[4796]: I1125 15:00:04.434383 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401335-kz9sv"] Nov 25 15:00:06 crc kubenswrapper[4796]: I1125 15:00:06.422346 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fab48abd-b847-4828-99f2-e9d7d3312e94" path="/var/lib/kubelet/pods/fab48abd-b847-4828-99f2-e9d7d3312e94/volumes" Nov 25 15:00:37 crc kubenswrapper[4796]: I1125 15:00:37.074488 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7s4lx"] Nov 25 15:00:37 crc kubenswrapper[4796]: E1125 15:00:37.075595 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fa2092c-558b-4bb7-b7e3-51a0551e4755" containerName="collect-profiles" Nov 25 15:00:37 crc kubenswrapper[4796]: I1125 15:00:37.075612 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fa2092c-558b-4bb7-b7e3-51a0551e4755" containerName="collect-profiles" Nov 25 15:00:37 crc kubenswrapper[4796]: I1125 15:00:37.075886 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fa2092c-558b-4bb7-b7e3-51a0551e4755" containerName="collect-profiles" Nov 25 15:00:37 crc kubenswrapper[4796]: I1125 15:00:37.077935 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7s4lx" Nov 25 15:00:37 crc kubenswrapper[4796]: I1125 15:00:37.092387 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7s4lx"] Nov 25 15:00:37 crc kubenswrapper[4796]: I1125 15:00:37.189931 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d20e6c7-0a9e-4913-82a5-fbb076e2e225-catalog-content\") pod \"redhat-marketplace-7s4lx\" (UID: \"9d20e6c7-0a9e-4913-82a5-fbb076e2e225\") " pod="openshift-marketplace/redhat-marketplace-7s4lx" Nov 25 15:00:37 crc kubenswrapper[4796]: I1125 15:00:37.189988 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn2g7\" (UniqueName: \"kubernetes.io/projected/9d20e6c7-0a9e-4913-82a5-fbb076e2e225-kube-api-access-zn2g7\") pod \"redhat-marketplace-7s4lx\" (UID: \"9d20e6c7-0a9e-4913-82a5-fbb076e2e225\") " pod="openshift-marketplace/redhat-marketplace-7s4lx" Nov 25 15:00:37 crc kubenswrapper[4796]: I1125 15:00:37.190193 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d20e6c7-0a9e-4913-82a5-fbb076e2e225-utilities\") pod \"redhat-marketplace-7s4lx\" (UID: \"9d20e6c7-0a9e-4913-82a5-fbb076e2e225\") " pod="openshift-marketplace/redhat-marketplace-7s4lx" Nov 25 15:00:37 crc kubenswrapper[4796]: I1125 15:00:37.292499 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d20e6c7-0a9e-4913-82a5-fbb076e2e225-utilities\") pod \"redhat-marketplace-7s4lx\" (UID: \"9d20e6c7-0a9e-4913-82a5-fbb076e2e225\") " pod="openshift-marketplace/redhat-marketplace-7s4lx" Nov 25 15:00:37 crc kubenswrapper[4796]: I1125 15:00:37.292596 4796 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d20e6c7-0a9e-4913-82a5-fbb076e2e225-catalog-content\") pod \"redhat-marketplace-7s4lx\" (UID: \"9d20e6c7-0a9e-4913-82a5-fbb076e2e225\") " pod="openshift-marketplace/redhat-marketplace-7s4lx" Nov 25 15:00:37 crc kubenswrapper[4796]: I1125 15:00:37.292616 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn2g7\" (UniqueName: \"kubernetes.io/projected/9d20e6c7-0a9e-4913-82a5-fbb076e2e225-kube-api-access-zn2g7\") pod \"redhat-marketplace-7s4lx\" (UID: \"9d20e6c7-0a9e-4913-82a5-fbb076e2e225\") " pod="openshift-marketplace/redhat-marketplace-7s4lx" Nov 25 15:00:37 crc kubenswrapper[4796]: I1125 15:00:37.293408 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d20e6c7-0a9e-4913-82a5-fbb076e2e225-catalog-content\") pod \"redhat-marketplace-7s4lx\" (UID: \"9d20e6c7-0a9e-4913-82a5-fbb076e2e225\") " pod="openshift-marketplace/redhat-marketplace-7s4lx" Nov 25 15:00:37 crc kubenswrapper[4796]: I1125 15:00:37.293723 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d20e6c7-0a9e-4913-82a5-fbb076e2e225-utilities\") pod \"redhat-marketplace-7s4lx\" (UID: \"9d20e6c7-0a9e-4913-82a5-fbb076e2e225\") " pod="openshift-marketplace/redhat-marketplace-7s4lx" Nov 25 15:00:37 crc kubenswrapper[4796]: I1125 15:00:37.313450 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn2g7\" (UniqueName: \"kubernetes.io/projected/9d20e6c7-0a9e-4913-82a5-fbb076e2e225-kube-api-access-zn2g7\") pod \"redhat-marketplace-7s4lx\" (UID: \"9d20e6c7-0a9e-4913-82a5-fbb076e2e225\") " pod="openshift-marketplace/redhat-marketplace-7s4lx" Nov 25 15:00:37 crc kubenswrapper[4796]: I1125 15:00:37.411657 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7s4lx" Nov 25 15:00:37 crc kubenswrapper[4796]: I1125 15:00:37.849521 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7s4lx"] Nov 25 15:00:38 crc kubenswrapper[4796]: I1125 15:00:38.319740 4796 generic.go:334] "Generic (PLEG): container finished" podID="9d20e6c7-0a9e-4913-82a5-fbb076e2e225" containerID="e97906138c4bfedab2274288a01e66476add359f5f720fff7bdfe871205fd431" exitCode=0 Nov 25 15:00:38 crc kubenswrapper[4796]: I1125 15:00:38.319858 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7s4lx" event={"ID":"9d20e6c7-0a9e-4913-82a5-fbb076e2e225","Type":"ContainerDied","Data":"e97906138c4bfedab2274288a01e66476add359f5f720fff7bdfe871205fd431"} Nov 25 15:00:38 crc kubenswrapper[4796]: I1125 15:00:38.320369 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7s4lx" event={"ID":"9d20e6c7-0a9e-4913-82a5-fbb076e2e225","Type":"ContainerStarted","Data":"70e6dec42f3ea1da4dd55756832fe1b57ad124e4f83aee1a3dabec2591f4e574"} Nov 25 15:00:38 crc kubenswrapper[4796]: I1125 15:00:38.323446 4796 generic.go:334] "Generic (PLEG): container finished" podID="552fef9f-5b94-4e45-9765-5b5e6ee62bfa" containerID="b95b9df332a1a4eb71f826a23116526ad9a54b5f32b16ddca2c17348d4e06b13" exitCode=0 Nov 25 15:00:38 crc kubenswrapper[4796]: I1125 15:00:38.323525 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" event={"ID":"552fef9f-5b94-4e45-9765-5b5e6ee62bfa","Type":"ContainerDied","Data":"b95b9df332a1a4eb71f826a23116526ad9a54b5f32b16ddca2c17348d4e06b13"} Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.345171 4796 generic.go:334] "Generic (PLEG): container finished" podID="9d20e6c7-0a9e-4913-82a5-fbb076e2e225" containerID="0d512f95aa1bd52caf8a139bf220dee2d0cc8147965b5314ecf9054bf9541f17" exitCode=0 
Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.345210 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7s4lx" event={"ID":"9d20e6c7-0a9e-4913-82a5-fbb076e2e225","Type":"ContainerDied","Data":"0d512f95aa1bd52caf8a139bf220dee2d0cc8147965b5314ecf9054bf9541f17"} Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.756773 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.843032 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.843162 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.843226 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-ovn-combined-ca-bundle\") pod \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.843251 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-neutron-metadata-combined-ca-bundle\") pod \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.843314 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-repo-setup-combined-ca-bundle\") pod \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.843341 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.843407 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-telemetry-combined-ca-bundle\") pod \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.843434 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-libvirt-combined-ca-bundle\") pod \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.843477 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-bootstrap-combined-ca-bundle\") pod \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.843506 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-openstack-edpm-ipam-ovn-default-certs-0\") pod \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.843594 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-inventory\") pod \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.843617 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d25k5\" (UniqueName: \"kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-kube-api-access-d25k5\") pod \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.843649 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-ssh-key\") pod \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\" (UID: \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.843691 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-nova-combined-ca-bundle\") pod \"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\" (UID: 
\"552fef9f-5b94-4e45-9765-5b5e6ee62bfa\") " Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.850815 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "552fef9f-5b94-4e45-9765-5b5e6ee62bfa" (UID: "552fef9f-5b94-4e45-9765-5b5e6ee62bfa"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.853458 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-kube-api-access-d25k5" (OuterVolumeSpecName: "kube-api-access-d25k5") pod "552fef9f-5b94-4e45-9765-5b5e6ee62bfa" (UID: "552fef9f-5b94-4e45-9765-5b5e6ee62bfa"). InnerVolumeSpecName "kube-api-access-d25k5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.853952 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "552fef9f-5b94-4e45-9765-5b5e6ee62bfa" (UID: "552fef9f-5b94-4e45-9765-5b5e6ee62bfa"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.854075 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "552fef9f-5b94-4e45-9765-5b5e6ee62bfa" (UID: "552fef9f-5b94-4e45-9765-5b5e6ee62bfa"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.854344 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "552fef9f-5b94-4e45-9765-5b5e6ee62bfa" (UID: "552fef9f-5b94-4e45-9765-5b5e6ee62bfa"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.854662 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "552fef9f-5b94-4e45-9765-5b5e6ee62bfa" (UID: "552fef9f-5b94-4e45-9765-5b5e6ee62bfa"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.854681 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "552fef9f-5b94-4e45-9765-5b5e6ee62bfa" (UID: "552fef9f-5b94-4e45-9765-5b5e6ee62bfa"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.854940 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "552fef9f-5b94-4e45-9765-5b5e6ee62bfa" (UID: "552fef9f-5b94-4e45-9765-5b5e6ee62bfa"). 
InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.855594 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "552fef9f-5b94-4e45-9765-5b5e6ee62bfa" (UID: "552fef9f-5b94-4e45-9765-5b5e6ee62bfa"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.856330 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "552fef9f-5b94-4e45-9765-5b5e6ee62bfa" (UID: "552fef9f-5b94-4e45-9765-5b5e6ee62bfa"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.859843 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "552fef9f-5b94-4e45-9765-5b5e6ee62bfa" (UID: "552fef9f-5b94-4e45-9765-5b5e6ee62bfa"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.862693 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "552fef9f-5b94-4e45-9765-5b5e6ee62bfa" (UID: "552fef9f-5b94-4e45-9765-5b5e6ee62bfa"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.876482 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "552fef9f-5b94-4e45-9765-5b5e6ee62bfa" (UID: "552fef9f-5b94-4e45-9765-5b5e6ee62bfa"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.884837 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-inventory" (OuterVolumeSpecName: "inventory") pod "552fef9f-5b94-4e45-9765-5b5e6ee62bfa" (UID: "552fef9f-5b94-4e45-9765-5b5e6ee62bfa"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.946379 4796 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.946421 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.946435 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d25k5\" (UniqueName: \"kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-kube-api-access-d25k5\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.946448 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 
25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.946460 4796 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.946474 4796 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.946488 4796 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.946501 4796 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.946515 4796 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.946529 4796 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.946542 4796 reconciler_common.go:293] "Volume detached for volume 
\"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.946554 4796 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.946566 4796 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:39 crc kubenswrapper[4796]: I1125 15:00:39.946593 4796 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/552fef9f-5b94-4e45-9765-5b5e6ee62bfa-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.356031 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" event={"ID":"552fef9f-5b94-4e45-9765-5b5e6ee62bfa","Type":"ContainerDied","Data":"924bf935b5dbedb93303780ec87bce4cb0a3fc6932e6bbe9f6437d6159f88f53"} Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.356326 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="924bf935b5dbedb93303780ec87bce4cb0a3fc6932e6bbe9f6437d6159f88f53" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.356095 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.359876 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7s4lx" event={"ID":"9d20e6c7-0a9e-4913-82a5-fbb076e2e225","Type":"ContainerStarted","Data":"9d450ab64fc6a9bdf01fb828aaca1e104fea19bd372b9d39e0820b5e5a9b180a"} Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.395391 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7s4lx" podStartSLOduration=1.788528954 podStartE2EDuration="3.395360444s" podCreationTimestamp="2025-11-25 15:00:37 +0000 UTC" firstStartedPulling="2025-11-25 15:00:38.323244311 +0000 UTC m=+2166.666353735" lastFinishedPulling="2025-11-25 15:00:39.930075801 +0000 UTC m=+2168.273185225" observedRunningTime="2025-11-25 15:00:40.383411199 +0000 UTC m=+2168.726520623" watchObservedRunningTime="2025-11-25 15:00:40.395360444 +0000 UTC m=+2168.738469908" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.472229 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6"] Nov 25 15:00:40 crc kubenswrapper[4796]: E1125 15:00:40.472737 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="552fef9f-5b94-4e45-9765-5b5e6ee62bfa" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.472760 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="552fef9f-5b94-4e45-9765-5b5e6ee62bfa" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.473037 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="552fef9f-5b94-4e45-9765-5b5e6ee62bfa" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.474999 
4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.476957 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.477401 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.478338 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.479168 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.484026 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6"] Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.486850 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n2hfx" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.557228 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a07af2cf-4057-4032-8535-6e8067892269-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fwnf6\" (UID: \"a07af2cf-4057-4032-8535-6e8067892269\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.557332 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a07af2cf-4057-4032-8535-6e8067892269-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fwnf6\" (UID: 
\"a07af2cf-4057-4032-8535-6e8067892269\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.557522 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a07af2cf-4057-4032-8535-6e8067892269-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fwnf6\" (UID: \"a07af2cf-4057-4032-8535-6e8067892269\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.558450 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crqr7\" (UniqueName: \"kubernetes.io/projected/a07af2cf-4057-4032-8535-6e8067892269-kube-api-access-crqr7\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fwnf6\" (UID: \"a07af2cf-4057-4032-8535-6e8067892269\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.558489 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a07af2cf-4057-4032-8535-6e8067892269-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fwnf6\" (UID: \"a07af2cf-4057-4032-8535-6e8067892269\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.660167 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crqr7\" (UniqueName: \"kubernetes.io/projected/a07af2cf-4057-4032-8535-6e8067892269-kube-api-access-crqr7\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fwnf6\" (UID: \"a07af2cf-4057-4032-8535-6e8067892269\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.660246 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a07af2cf-4057-4032-8535-6e8067892269-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fwnf6\" (UID: \"a07af2cf-4057-4032-8535-6e8067892269\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.660379 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a07af2cf-4057-4032-8535-6e8067892269-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fwnf6\" (UID: \"a07af2cf-4057-4032-8535-6e8067892269\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.660418 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a07af2cf-4057-4032-8535-6e8067892269-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fwnf6\" (UID: \"a07af2cf-4057-4032-8535-6e8067892269\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.660469 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a07af2cf-4057-4032-8535-6e8067892269-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fwnf6\" (UID: \"a07af2cf-4057-4032-8535-6e8067892269\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.661472 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a07af2cf-4057-4032-8535-6e8067892269-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fwnf6\" (UID: \"a07af2cf-4057-4032-8535-6e8067892269\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.671178 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a07af2cf-4057-4032-8535-6e8067892269-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fwnf6\" (UID: \"a07af2cf-4057-4032-8535-6e8067892269\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.671395 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a07af2cf-4057-4032-8535-6e8067892269-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fwnf6\" (UID: \"a07af2cf-4057-4032-8535-6e8067892269\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.671900 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a07af2cf-4057-4032-8535-6e8067892269-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fwnf6\" (UID: \"a07af2cf-4057-4032-8535-6e8067892269\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.684246 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crqr7\" (UniqueName: \"kubernetes.io/projected/a07af2cf-4057-4032-8535-6e8067892269-kube-api-access-crqr7\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fwnf6\" (UID: \"a07af2cf-4057-4032-8535-6e8067892269\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6" Nov 25 15:00:40 crc kubenswrapper[4796]: I1125 15:00:40.789837 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6" Nov 25 15:00:41 crc kubenswrapper[4796]: I1125 15:00:41.321211 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6"] Nov 25 15:00:41 crc kubenswrapper[4796]: W1125 15:00:41.334936 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda07af2cf_4057_4032_8535_6e8067892269.slice/crio-626222be09144ff1396d01792300b4bbc91461bd992e3f131ef6f435fa9a6fe8 WatchSource:0}: Error finding container 626222be09144ff1396d01792300b4bbc91461bd992e3f131ef6f435fa9a6fe8: Status 404 returned error can't find the container with id 626222be09144ff1396d01792300b4bbc91461bd992e3f131ef6f435fa9a6fe8 Nov 25 15:00:41 crc kubenswrapper[4796]: I1125 15:00:41.370617 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6" event={"ID":"a07af2cf-4057-4032-8535-6e8067892269","Type":"ContainerStarted","Data":"626222be09144ff1396d01792300b4bbc91461bd992e3f131ef6f435fa9a6fe8"} Nov 25 15:00:42 crc kubenswrapper[4796]: I1125 15:00:42.378036 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6" event={"ID":"a07af2cf-4057-4032-8535-6e8067892269","Type":"ContainerStarted","Data":"a1b3305347aeaf583d7a75632a2a1bfffa3dff6cfd3388320c52a752ac8f3535"} Nov 25 15:00:42 crc kubenswrapper[4796]: I1125 15:00:42.394285 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6" podStartSLOduration=1.896459735 podStartE2EDuration="2.394260635s" podCreationTimestamp="2025-11-25 15:00:40 +0000 UTC" firstStartedPulling="2025-11-25 15:00:41.336725956 +0000 UTC m=+2169.679835380" lastFinishedPulling="2025-11-25 15:00:41.834526856 +0000 UTC m=+2170.177636280" observedRunningTime="2025-11-25 
15:00:42.390847148 +0000 UTC m=+2170.733956572" watchObservedRunningTime="2025-11-25 15:00:42.394260635 +0000 UTC m=+2170.737370059" Nov 25 15:00:47 crc kubenswrapper[4796]: I1125 15:00:47.412615 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7s4lx" Nov 25 15:00:47 crc kubenswrapper[4796]: I1125 15:00:47.413162 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7s4lx" Nov 25 15:00:47 crc kubenswrapper[4796]: I1125 15:00:47.476060 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7s4lx" Nov 25 15:00:47 crc kubenswrapper[4796]: I1125 15:00:47.527688 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7s4lx" Nov 25 15:00:48 crc kubenswrapper[4796]: I1125 15:00:48.294831 4796 scope.go:117] "RemoveContainer" containerID="c7e1f60ff6e8f6f667659e5dc9896c30689bb50f7e829fc825e978d0e3736b5d" Nov 25 15:00:50 crc kubenswrapper[4796]: I1125 15:00:50.847697 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7s4lx"] Nov 25 15:00:50 crc kubenswrapper[4796]: I1125 15:00:50.849982 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7s4lx" podUID="9d20e6c7-0a9e-4913-82a5-fbb076e2e225" containerName="registry-server" containerID="cri-o://9d450ab64fc6a9bdf01fb828aaca1e104fea19bd372b9d39e0820b5e5a9b180a" gracePeriod=2 Nov 25 15:00:51 crc kubenswrapper[4796]: I1125 15:00:51.293322 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7s4lx" Nov 25 15:00:51 crc kubenswrapper[4796]: I1125 15:00:51.389202 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d20e6c7-0a9e-4913-82a5-fbb076e2e225-catalog-content\") pod \"9d20e6c7-0a9e-4913-82a5-fbb076e2e225\" (UID: \"9d20e6c7-0a9e-4913-82a5-fbb076e2e225\") " Nov 25 15:00:51 crc kubenswrapper[4796]: I1125 15:00:51.389346 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn2g7\" (UniqueName: \"kubernetes.io/projected/9d20e6c7-0a9e-4913-82a5-fbb076e2e225-kube-api-access-zn2g7\") pod \"9d20e6c7-0a9e-4913-82a5-fbb076e2e225\" (UID: \"9d20e6c7-0a9e-4913-82a5-fbb076e2e225\") " Nov 25 15:00:51 crc kubenswrapper[4796]: I1125 15:00:51.389663 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d20e6c7-0a9e-4913-82a5-fbb076e2e225-utilities\") pod \"9d20e6c7-0a9e-4913-82a5-fbb076e2e225\" (UID: \"9d20e6c7-0a9e-4913-82a5-fbb076e2e225\") " Nov 25 15:00:51 crc kubenswrapper[4796]: I1125 15:00:51.390552 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d20e6c7-0a9e-4913-82a5-fbb076e2e225-utilities" (OuterVolumeSpecName: "utilities") pod "9d20e6c7-0a9e-4913-82a5-fbb076e2e225" (UID: "9d20e6c7-0a9e-4913-82a5-fbb076e2e225"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:00:51 crc kubenswrapper[4796]: I1125 15:00:51.395479 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d20e6c7-0a9e-4913-82a5-fbb076e2e225-kube-api-access-zn2g7" (OuterVolumeSpecName: "kube-api-access-zn2g7") pod "9d20e6c7-0a9e-4913-82a5-fbb076e2e225" (UID: "9d20e6c7-0a9e-4913-82a5-fbb076e2e225"). InnerVolumeSpecName "kube-api-access-zn2g7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:00:51 crc kubenswrapper[4796]: I1125 15:00:51.420879 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d20e6c7-0a9e-4913-82a5-fbb076e2e225-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d20e6c7-0a9e-4913-82a5-fbb076e2e225" (UID: "9d20e6c7-0a9e-4913-82a5-fbb076e2e225"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:00:51 crc kubenswrapper[4796]: I1125 15:00:51.473563 4796 generic.go:334] "Generic (PLEG): container finished" podID="9d20e6c7-0a9e-4913-82a5-fbb076e2e225" containerID="9d450ab64fc6a9bdf01fb828aaca1e104fea19bd372b9d39e0820b5e5a9b180a" exitCode=0 Nov 25 15:00:51 crc kubenswrapper[4796]: I1125 15:00:51.473655 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7s4lx" event={"ID":"9d20e6c7-0a9e-4913-82a5-fbb076e2e225","Type":"ContainerDied","Data":"9d450ab64fc6a9bdf01fb828aaca1e104fea19bd372b9d39e0820b5e5a9b180a"} Nov 25 15:00:51 crc kubenswrapper[4796]: I1125 15:00:51.473709 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7s4lx" event={"ID":"9d20e6c7-0a9e-4913-82a5-fbb076e2e225","Type":"ContainerDied","Data":"70e6dec42f3ea1da4dd55756832fe1b57ad124e4f83aee1a3dabec2591f4e574"} Nov 25 15:00:51 crc kubenswrapper[4796]: I1125 15:00:51.473740 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7s4lx" Nov 25 15:00:51 crc kubenswrapper[4796]: I1125 15:00:51.473747 4796 scope.go:117] "RemoveContainer" containerID="9d450ab64fc6a9bdf01fb828aaca1e104fea19bd372b9d39e0820b5e5a9b180a" Nov 25 15:00:51 crc kubenswrapper[4796]: I1125 15:00:51.492293 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d20e6c7-0a9e-4913-82a5-fbb076e2e225-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:51 crc kubenswrapper[4796]: I1125 15:00:51.492348 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn2g7\" (UniqueName: \"kubernetes.io/projected/9d20e6c7-0a9e-4913-82a5-fbb076e2e225-kube-api-access-zn2g7\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:51 crc kubenswrapper[4796]: I1125 15:00:51.492360 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d20e6c7-0a9e-4913-82a5-fbb076e2e225-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:51 crc kubenswrapper[4796]: I1125 15:00:51.502627 4796 scope.go:117] "RemoveContainer" containerID="0d512f95aa1bd52caf8a139bf220dee2d0cc8147965b5314ecf9054bf9541f17" Nov 25 15:00:51 crc kubenswrapper[4796]: I1125 15:00:51.548557 4796 scope.go:117] "RemoveContainer" containerID="e97906138c4bfedab2274288a01e66476add359f5f720fff7bdfe871205fd431" Nov 25 15:00:51 crc kubenswrapper[4796]: I1125 15:00:51.549390 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7s4lx"] Nov 25 15:00:51 crc kubenswrapper[4796]: I1125 15:00:51.557255 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7s4lx"] Nov 25 15:00:51 crc kubenswrapper[4796]: I1125 15:00:51.583147 4796 scope.go:117] "RemoveContainer" containerID="9d450ab64fc6a9bdf01fb828aaca1e104fea19bd372b9d39e0820b5e5a9b180a" Nov 25 15:00:51 crc kubenswrapper[4796]: E1125 
15:00:51.583666 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d450ab64fc6a9bdf01fb828aaca1e104fea19bd372b9d39e0820b5e5a9b180a\": container with ID starting with 9d450ab64fc6a9bdf01fb828aaca1e104fea19bd372b9d39e0820b5e5a9b180a not found: ID does not exist" containerID="9d450ab64fc6a9bdf01fb828aaca1e104fea19bd372b9d39e0820b5e5a9b180a" Nov 25 15:00:51 crc kubenswrapper[4796]: I1125 15:00:51.583707 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d450ab64fc6a9bdf01fb828aaca1e104fea19bd372b9d39e0820b5e5a9b180a"} err="failed to get container status \"9d450ab64fc6a9bdf01fb828aaca1e104fea19bd372b9d39e0820b5e5a9b180a\": rpc error: code = NotFound desc = could not find container \"9d450ab64fc6a9bdf01fb828aaca1e104fea19bd372b9d39e0820b5e5a9b180a\": container with ID starting with 9d450ab64fc6a9bdf01fb828aaca1e104fea19bd372b9d39e0820b5e5a9b180a not found: ID does not exist" Nov 25 15:00:51 crc kubenswrapper[4796]: I1125 15:00:51.583734 4796 scope.go:117] "RemoveContainer" containerID="0d512f95aa1bd52caf8a139bf220dee2d0cc8147965b5314ecf9054bf9541f17" Nov 25 15:00:51 crc kubenswrapper[4796]: E1125 15:00:51.583976 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d512f95aa1bd52caf8a139bf220dee2d0cc8147965b5314ecf9054bf9541f17\": container with ID starting with 0d512f95aa1bd52caf8a139bf220dee2d0cc8147965b5314ecf9054bf9541f17 not found: ID does not exist" containerID="0d512f95aa1bd52caf8a139bf220dee2d0cc8147965b5314ecf9054bf9541f17" Nov 25 15:00:51 crc kubenswrapper[4796]: I1125 15:00:51.584008 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d512f95aa1bd52caf8a139bf220dee2d0cc8147965b5314ecf9054bf9541f17"} err="failed to get container status \"0d512f95aa1bd52caf8a139bf220dee2d0cc8147965b5314ecf9054bf9541f17\": rpc 
error: code = NotFound desc = could not find container \"0d512f95aa1bd52caf8a139bf220dee2d0cc8147965b5314ecf9054bf9541f17\": container with ID starting with 0d512f95aa1bd52caf8a139bf220dee2d0cc8147965b5314ecf9054bf9541f17 not found: ID does not exist" Nov 25 15:00:51 crc kubenswrapper[4796]: I1125 15:00:51.584034 4796 scope.go:117] "RemoveContainer" containerID="e97906138c4bfedab2274288a01e66476add359f5f720fff7bdfe871205fd431" Nov 25 15:00:51 crc kubenswrapper[4796]: E1125 15:00:51.584412 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e97906138c4bfedab2274288a01e66476add359f5f720fff7bdfe871205fd431\": container with ID starting with e97906138c4bfedab2274288a01e66476add359f5f720fff7bdfe871205fd431 not found: ID does not exist" containerID="e97906138c4bfedab2274288a01e66476add359f5f720fff7bdfe871205fd431" Nov 25 15:00:51 crc kubenswrapper[4796]: I1125 15:00:51.584435 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e97906138c4bfedab2274288a01e66476add359f5f720fff7bdfe871205fd431"} err="failed to get container status \"e97906138c4bfedab2274288a01e66476add359f5f720fff7bdfe871205fd431\": rpc error: code = NotFound desc = could not find container \"e97906138c4bfedab2274288a01e66476add359f5f720fff7bdfe871205fd431\": container with ID starting with e97906138c4bfedab2274288a01e66476add359f5f720fff7bdfe871205fd431 not found: ID does not exist" Nov 25 15:00:52 crc kubenswrapper[4796]: I1125 15:00:52.419796 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d20e6c7-0a9e-4913-82a5-fbb076e2e225" path="/var/lib/kubelet/pods/9d20e6c7-0a9e-4913-82a5-fbb076e2e225/volumes" Nov 25 15:01:00 crc kubenswrapper[4796]: I1125 15:01:00.150318 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29401381-s9lbp"] Nov 25 15:01:00 crc kubenswrapper[4796]: E1125 15:01:00.151315 4796 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="9d20e6c7-0a9e-4913-82a5-fbb076e2e225" containerName="registry-server" Nov 25 15:01:00 crc kubenswrapper[4796]: I1125 15:01:00.151338 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d20e6c7-0a9e-4913-82a5-fbb076e2e225" containerName="registry-server" Nov 25 15:01:00 crc kubenswrapper[4796]: E1125 15:01:00.151370 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d20e6c7-0a9e-4913-82a5-fbb076e2e225" containerName="extract-content" Nov 25 15:01:00 crc kubenswrapper[4796]: I1125 15:01:00.151378 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d20e6c7-0a9e-4913-82a5-fbb076e2e225" containerName="extract-content" Nov 25 15:01:00 crc kubenswrapper[4796]: E1125 15:01:00.151390 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d20e6c7-0a9e-4913-82a5-fbb076e2e225" containerName="extract-utilities" Nov 25 15:01:00 crc kubenswrapper[4796]: I1125 15:01:00.151397 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d20e6c7-0a9e-4913-82a5-fbb076e2e225" containerName="extract-utilities" Nov 25 15:01:00 crc kubenswrapper[4796]: I1125 15:01:00.151809 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d20e6c7-0a9e-4913-82a5-fbb076e2e225" containerName="registry-server" Nov 25 15:01:00 crc kubenswrapper[4796]: I1125 15:01:00.152509 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401381-s9lbp" Nov 25 15:01:00 crc kubenswrapper[4796]: I1125 15:01:00.167940 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29401381-s9lbp"] Nov 25 15:01:00 crc kubenswrapper[4796]: I1125 15:01:00.316561 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llw48\" (UniqueName: \"kubernetes.io/projected/72d4d931-5b18-49ad-a427-9997259fc320-kube-api-access-llw48\") pod \"keystone-cron-29401381-s9lbp\" (UID: \"72d4d931-5b18-49ad-a427-9997259fc320\") " pod="openstack/keystone-cron-29401381-s9lbp" Nov 25 15:01:00 crc kubenswrapper[4796]: I1125 15:01:00.316647 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d4d931-5b18-49ad-a427-9997259fc320-combined-ca-bundle\") pod \"keystone-cron-29401381-s9lbp\" (UID: \"72d4d931-5b18-49ad-a427-9997259fc320\") " pod="openstack/keystone-cron-29401381-s9lbp" Nov 25 15:01:00 crc kubenswrapper[4796]: I1125 15:01:00.316763 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72d4d931-5b18-49ad-a427-9997259fc320-fernet-keys\") pod \"keystone-cron-29401381-s9lbp\" (UID: \"72d4d931-5b18-49ad-a427-9997259fc320\") " pod="openstack/keystone-cron-29401381-s9lbp" Nov 25 15:01:00 crc kubenswrapper[4796]: I1125 15:01:00.316839 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72d4d931-5b18-49ad-a427-9997259fc320-config-data\") pod \"keystone-cron-29401381-s9lbp\" (UID: \"72d4d931-5b18-49ad-a427-9997259fc320\") " pod="openstack/keystone-cron-29401381-s9lbp" Nov 25 15:01:00 crc kubenswrapper[4796]: I1125 15:01:00.418152 4796 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72d4d931-5b18-49ad-a427-9997259fc320-config-data\") pod \"keystone-cron-29401381-s9lbp\" (UID: \"72d4d931-5b18-49ad-a427-9997259fc320\") " pod="openstack/keystone-cron-29401381-s9lbp" Nov 25 15:01:00 crc kubenswrapper[4796]: I1125 15:01:00.418274 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llw48\" (UniqueName: \"kubernetes.io/projected/72d4d931-5b18-49ad-a427-9997259fc320-kube-api-access-llw48\") pod \"keystone-cron-29401381-s9lbp\" (UID: \"72d4d931-5b18-49ad-a427-9997259fc320\") " pod="openstack/keystone-cron-29401381-s9lbp" Nov 25 15:01:00 crc kubenswrapper[4796]: I1125 15:01:00.418315 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d4d931-5b18-49ad-a427-9997259fc320-combined-ca-bundle\") pod \"keystone-cron-29401381-s9lbp\" (UID: \"72d4d931-5b18-49ad-a427-9997259fc320\") " pod="openstack/keystone-cron-29401381-s9lbp" Nov 25 15:01:00 crc kubenswrapper[4796]: I1125 15:01:00.418381 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72d4d931-5b18-49ad-a427-9997259fc320-fernet-keys\") pod \"keystone-cron-29401381-s9lbp\" (UID: \"72d4d931-5b18-49ad-a427-9997259fc320\") " pod="openstack/keystone-cron-29401381-s9lbp" Nov 25 15:01:00 crc kubenswrapper[4796]: I1125 15:01:00.427098 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72d4d931-5b18-49ad-a427-9997259fc320-fernet-keys\") pod \"keystone-cron-29401381-s9lbp\" (UID: \"72d4d931-5b18-49ad-a427-9997259fc320\") " pod="openstack/keystone-cron-29401381-s9lbp" Nov 25 15:01:00 crc kubenswrapper[4796]: I1125 15:01:00.427189 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/72d4d931-5b18-49ad-a427-9997259fc320-config-data\") pod \"keystone-cron-29401381-s9lbp\" (UID: \"72d4d931-5b18-49ad-a427-9997259fc320\") " pod="openstack/keystone-cron-29401381-s9lbp" Nov 25 15:01:00 crc kubenswrapper[4796]: I1125 15:01:00.427260 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d4d931-5b18-49ad-a427-9997259fc320-combined-ca-bundle\") pod \"keystone-cron-29401381-s9lbp\" (UID: \"72d4d931-5b18-49ad-a427-9997259fc320\") " pod="openstack/keystone-cron-29401381-s9lbp" Nov 25 15:01:00 crc kubenswrapper[4796]: I1125 15:01:00.439434 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llw48\" (UniqueName: \"kubernetes.io/projected/72d4d931-5b18-49ad-a427-9997259fc320-kube-api-access-llw48\") pod \"keystone-cron-29401381-s9lbp\" (UID: \"72d4d931-5b18-49ad-a427-9997259fc320\") " pod="openstack/keystone-cron-29401381-s9lbp" Nov 25 15:01:00 crc kubenswrapper[4796]: I1125 15:01:00.485122 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401381-s9lbp" Nov 25 15:01:00 crc kubenswrapper[4796]: I1125 15:01:00.935459 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29401381-s9lbp"] Nov 25 15:01:01 crc kubenswrapper[4796]: I1125 15:01:01.577768 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401381-s9lbp" event={"ID":"72d4d931-5b18-49ad-a427-9997259fc320","Type":"ContainerStarted","Data":"54d2816f5f951ccfeabb7c8d2d955d47f7968f5fa6a3f9657b82b7bc70730560"} Nov 25 15:01:01 crc kubenswrapper[4796]: I1125 15:01:01.578030 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401381-s9lbp" event={"ID":"72d4d931-5b18-49ad-a427-9997259fc320","Type":"ContainerStarted","Data":"ace880d6244c1122891803189bcaf429366441f106fec1afe75a0ee791cbacb4"} Nov 25 15:01:01 crc kubenswrapper[4796]: I1125 15:01:01.596505 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29401381-s9lbp" podStartSLOduration=1.5964865879999999 podStartE2EDuration="1.596486588s" podCreationTimestamp="2025-11-25 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:01:01.595490438 +0000 UTC m=+2189.938599862" watchObservedRunningTime="2025-11-25 15:01:01.596486588 +0000 UTC m=+2189.939596002" Nov 25 15:01:03 crc kubenswrapper[4796]: I1125 15:01:03.600064 4796 generic.go:334] "Generic (PLEG): container finished" podID="72d4d931-5b18-49ad-a427-9997259fc320" containerID="54d2816f5f951ccfeabb7c8d2d955d47f7968f5fa6a3f9657b82b7bc70730560" exitCode=0 Nov 25 15:01:03 crc kubenswrapper[4796]: I1125 15:01:03.600189 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401381-s9lbp" 
event={"ID":"72d4d931-5b18-49ad-a427-9997259fc320","Type":"ContainerDied","Data":"54d2816f5f951ccfeabb7c8d2d955d47f7968f5fa6a3f9657b82b7bc70730560"} Nov 25 15:01:04 crc kubenswrapper[4796]: I1125 15:01:04.942387 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29401381-s9lbp" Nov 25 15:01:05 crc kubenswrapper[4796]: I1125 15:01:05.116176 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72d4d931-5b18-49ad-a427-9997259fc320-config-data\") pod \"72d4d931-5b18-49ad-a427-9997259fc320\" (UID: \"72d4d931-5b18-49ad-a427-9997259fc320\") " Nov 25 15:01:05 crc kubenswrapper[4796]: I1125 15:01:05.116419 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llw48\" (UniqueName: \"kubernetes.io/projected/72d4d931-5b18-49ad-a427-9997259fc320-kube-api-access-llw48\") pod \"72d4d931-5b18-49ad-a427-9997259fc320\" (UID: \"72d4d931-5b18-49ad-a427-9997259fc320\") " Nov 25 15:01:05 crc kubenswrapper[4796]: I1125 15:01:05.116509 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d4d931-5b18-49ad-a427-9997259fc320-combined-ca-bundle\") pod \"72d4d931-5b18-49ad-a427-9997259fc320\" (UID: \"72d4d931-5b18-49ad-a427-9997259fc320\") " Nov 25 15:01:05 crc kubenswrapper[4796]: I1125 15:01:05.116564 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72d4d931-5b18-49ad-a427-9997259fc320-fernet-keys\") pod \"72d4d931-5b18-49ad-a427-9997259fc320\" (UID: \"72d4d931-5b18-49ad-a427-9997259fc320\") " Nov 25 15:01:05 crc kubenswrapper[4796]: I1125 15:01:05.123925 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72d4d931-5b18-49ad-a427-9997259fc320-kube-api-access-llw48" 
(OuterVolumeSpecName: "kube-api-access-llw48") pod "72d4d931-5b18-49ad-a427-9997259fc320" (UID: "72d4d931-5b18-49ad-a427-9997259fc320"). InnerVolumeSpecName "kube-api-access-llw48". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:01:05 crc kubenswrapper[4796]: I1125 15:01:05.124316 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72d4d931-5b18-49ad-a427-9997259fc320-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "72d4d931-5b18-49ad-a427-9997259fc320" (UID: "72d4d931-5b18-49ad-a427-9997259fc320"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:01:05 crc kubenswrapper[4796]: I1125 15:01:05.151593 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72d4d931-5b18-49ad-a427-9997259fc320-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72d4d931-5b18-49ad-a427-9997259fc320" (UID: "72d4d931-5b18-49ad-a427-9997259fc320"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:01:05 crc kubenswrapper[4796]: I1125 15:01:05.177247 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72d4d931-5b18-49ad-a427-9997259fc320-config-data" (OuterVolumeSpecName: "config-data") pod "72d4d931-5b18-49ad-a427-9997259fc320" (UID: "72d4d931-5b18-49ad-a427-9997259fc320"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:01:05 crc kubenswrapper[4796]: I1125 15:01:05.219233 4796 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72d4d931-5b18-49ad-a427-9997259fc320-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 15:01:05 crc kubenswrapper[4796]: I1125 15:01:05.219296 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72d4d931-5b18-49ad-a427-9997259fc320-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:01:05 crc kubenswrapper[4796]: I1125 15:01:05.219317 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llw48\" (UniqueName: \"kubernetes.io/projected/72d4d931-5b18-49ad-a427-9997259fc320-kube-api-access-llw48\") on node \"crc\" DevicePath \"\"" Nov 25 15:01:05 crc kubenswrapper[4796]: I1125 15:01:05.219337 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d4d931-5b18-49ad-a427-9997259fc320-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:01:05 crc kubenswrapper[4796]: I1125 15:01:05.618805 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401381-s9lbp" event={"ID":"72d4d931-5b18-49ad-a427-9997259fc320","Type":"ContainerDied","Data":"ace880d6244c1122891803189bcaf429366441f106fec1afe75a0ee791cbacb4"} Nov 25 15:01:05 crc kubenswrapper[4796]: I1125 15:01:05.619197 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ace880d6244c1122891803189bcaf429366441f106fec1afe75a0ee791cbacb4" Nov 25 15:01:05 crc kubenswrapper[4796]: I1125 15:01:05.618861 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401381-s9lbp" Nov 25 15:01:19 crc kubenswrapper[4796]: I1125 15:01:19.514271 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:01:19 crc kubenswrapper[4796]: I1125 15:01:19.514884 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:01:48 crc kubenswrapper[4796]: I1125 15:01:48.064552 4796 generic.go:334] "Generic (PLEG): container finished" podID="a07af2cf-4057-4032-8535-6e8067892269" containerID="a1b3305347aeaf583d7a75632a2a1bfffa3dff6cfd3388320c52a752ac8f3535" exitCode=0 Nov 25 15:01:48 crc kubenswrapper[4796]: I1125 15:01:48.064649 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6" event={"ID":"a07af2cf-4057-4032-8535-6e8067892269","Type":"ContainerDied","Data":"a1b3305347aeaf583d7a75632a2a1bfffa3dff6cfd3388320c52a752ac8f3535"} Nov 25 15:01:49 crc kubenswrapper[4796]: I1125 15:01:49.514314 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:01:49 crc kubenswrapper[4796]: I1125 15:01:49.514643 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:01:49 crc kubenswrapper[4796]: I1125 15:01:49.590522 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6" Nov 25 15:01:49 crc kubenswrapper[4796]: I1125 15:01:49.735780 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a07af2cf-4057-4032-8535-6e8067892269-ovncontroller-config-0\") pod \"a07af2cf-4057-4032-8535-6e8067892269\" (UID: \"a07af2cf-4057-4032-8535-6e8067892269\") " Nov 25 15:01:49 crc kubenswrapper[4796]: I1125 15:01:49.735980 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crqr7\" (UniqueName: \"kubernetes.io/projected/a07af2cf-4057-4032-8535-6e8067892269-kube-api-access-crqr7\") pod \"a07af2cf-4057-4032-8535-6e8067892269\" (UID: \"a07af2cf-4057-4032-8535-6e8067892269\") " Nov 25 15:01:49 crc kubenswrapper[4796]: I1125 15:01:49.736070 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a07af2cf-4057-4032-8535-6e8067892269-ovn-combined-ca-bundle\") pod \"a07af2cf-4057-4032-8535-6e8067892269\" (UID: \"a07af2cf-4057-4032-8535-6e8067892269\") " Nov 25 15:01:49 crc kubenswrapper[4796]: I1125 15:01:49.736090 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a07af2cf-4057-4032-8535-6e8067892269-ssh-key\") pod \"a07af2cf-4057-4032-8535-6e8067892269\" (UID: \"a07af2cf-4057-4032-8535-6e8067892269\") " Nov 25 15:01:49 crc kubenswrapper[4796]: I1125 15:01:49.736119 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/a07af2cf-4057-4032-8535-6e8067892269-inventory\") pod \"a07af2cf-4057-4032-8535-6e8067892269\" (UID: \"a07af2cf-4057-4032-8535-6e8067892269\") " Nov 25 15:01:49 crc kubenswrapper[4796]: I1125 15:01:49.744044 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a07af2cf-4057-4032-8535-6e8067892269-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "a07af2cf-4057-4032-8535-6e8067892269" (UID: "a07af2cf-4057-4032-8535-6e8067892269"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:01:49 crc kubenswrapper[4796]: I1125 15:01:49.744685 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a07af2cf-4057-4032-8535-6e8067892269-kube-api-access-crqr7" (OuterVolumeSpecName: "kube-api-access-crqr7") pod "a07af2cf-4057-4032-8535-6e8067892269" (UID: "a07af2cf-4057-4032-8535-6e8067892269"). InnerVolumeSpecName "kube-api-access-crqr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:01:49 crc kubenswrapper[4796]: I1125 15:01:49.768811 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a07af2cf-4057-4032-8535-6e8067892269-inventory" (OuterVolumeSpecName: "inventory") pod "a07af2cf-4057-4032-8535-6e8067892269" (UID: "a07af2cf-4057-4032-8535-6e8067892269"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:01:49 crc kubenswrapper[4796]: I1125 15:01:49.769561 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a07af2cf-4057-4032-8535-6e8067892269-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a07af2cf-4057-4032-8535-6e8067892269" (UID: "a07af2cf-4057-4032-8535-6e8067892269"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:01:49 crc kubenswrapper[4796]: I1125 15:01:49.789637 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a07af2cf-4057-4032-8535-6e8067892269-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "a07af2cf-4057-4032-8535-6e8067892269" (UID: "a07af2cf-4057-4032-8535-6e8067892269"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:01:49 crc kubenswrapper[4796]: I1125 15:01:49.838590 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crqr7\" (UniqueName: \"kubernetes.io/projected/a07af2cf-4057-4032-8535-6e8067892269-kube-api-access-crqr7\") on node \"crc\" DevicePath \"\"" Nov 25 15:01:49 crc kubenswrapper[4796]: I1125 15:01:49.838858 4796 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a07af2cf-4057-4032-8535-6e8067892269-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:01:49 crc kubenswrapper[4796]: I1125 15:01:49.838972 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a07af2cf-4057-4032-8535-6e8067892269-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:01:49 crc kubenswrapper[4796]: I1125 15:01:49.839060 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a07af2cf-4057-4032-8535-6e8067892269-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:01:49 crc kubenswrapper[4796]: I1125 15:01:49.839137 4796 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a07af2cf-4057-4032-8535-6e8067892269-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.088780 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6" event={"ID":"a07af2cf-4057-4032-8535-6e8067892269","Type":"ContainerDied","Data":"626222be09144ff1396d01792300b4bbc91461bd992e3f131ef6f435fa9a6fe8"} Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.089312 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="626222be09144ff1396d01792300b4bbc91461bd992e3f131ef6f435fa9a6fe8" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.088949 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fwnf6" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.229881 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt"] Nov 25 15:01:50 crc kubenswrapper[4796]: E1125 15:01:50.230509 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a07af2cf-4057-4032-8535-6e8067892269" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.230528 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a07af2cf-4057-4032-8535-6e8067892269" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 15:01:50 crc kubenswrapper[4796]: E1125 15:01:50.230593 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d4d931-5b18-49ad-a427-9997259fc320" containerName="keystone-cron" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.230605 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d4d931-5b18-49ad-a427-9997259fc320" containerName="keystone-cron" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.230884 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="72d4d931-5b18-49ad-a427-9997259fc320" containerName="keystone-cron" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.230917 4796 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a07af2cf-4057-4032-8535-6e8067892269" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.231686 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.235975 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.236044 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n2hfx" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.236231 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.236400 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.236474 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.237022 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.244844 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt"] Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.350354 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt\" (UID: \"3a001f8e-537d-4c17-88cd-b1c2a8727074\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.350438 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sngvk\" (UniqueName: \"kubernetes.io/projected/3a001f8e-537d-4c17-88cd-b1c2a8727074-kube-api-access-sngvk\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt\" (UID: \"3a001f8e-537d-4c17-88cd-b1c2a8727074\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.350669 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt\" (UID: \"3a001f8e-537d-4c17-88cd-b1c2a8727074\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.350866 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt\" (UID: \"3a001f8e-537d-4c17-88cd-b1c2a8727074\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.350900 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt\" (UID: \"3a001f8e-537d-4c17-88cd-b1c2a8727074\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.350943 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt\" (UID: \"3a001f8e-537d-4c17-88cd-b1c2a8727074\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.453433 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt\" (UID: \"3a001f8e-537d-4c17-88cd-b1c2a8727074\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.453512 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt\" (UID: \"3a001f8e-537d-4c17-88cd-b1c2a8727074\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.453677 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt\" (UID: \"3a001f8e-537d-4c17-88cd-b1c2a8727074\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" Nov 25 15:01:50 crc 
kubenswrapper[4796]: I1125 15:01:50.453727 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt\" (UID: \"3a001f8e-537d-4c17-88cd-b1c2a8727074\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.453864 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sngvk\" (UniqueName: \"kubernetes.io/projected/3a001f8e-537d-4c17-88cd-b1c2a8727074-kube-api-access-sngvk\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt\" (UID: \"3a001f8e-537d-4c17-88cd-b1c2a8727074\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.453979 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt\" (UID: \"3a001f8e-537d-4c17-88cd-b1c2a8727074\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.459102 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt\" (UID: \"3a001f8e-537d-4c17-88cd-b1c2a8727074\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.459151 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt\" (UID: \"3a001f8e-537d-4c17-88cd-b1c2a8727074\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.460063 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt\" (UID: \"3a001f8e-537d-4c17-88cd-b1c2a8727074\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.460091 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt\" (UID: \"3a001f8e-537d-4c17-88cd-b1c2a8727074\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.466699 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt\" (UID: \"3a001f8e-537d-4c17-88cd-b1c2a8727074\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.474204 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sngvk\" (UniqueName: \"kubernetes.io/projected/3a001f8e-537d-4c17-88cd-b1c2a8727074-kube-api-access-sngvk\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt\" (UID: \"3a001f8e-537d-4c17-88cd-b1c2a8727074\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" Nov 25 15:01:50 crc kubenswrapper[4796]: I1125 15:01:50.556700 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" Nov 25 15:01:51 crc kubenswrapper[4796]: I1125 15:01:51.086125 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt"] Nov 25 15:01:51 crc kubenswrapper[4796]: I1125 15:01:51.098112 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" event={"ID":"3a001f8e-537d-4c17-88cd-b1c2a8727074","Type":"ContainerStarted","Data":"c349ada872e0949543c06323ea452d0cacbe33e0b283d8f70a0c316c0c0db7a0"} Nov 25 15:01:52 crc kubenswrapper[4796]: I1125 15:01:52.123835 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" event={"ID":"3a001f8e-537d-4c17-88cd-b1c2a8727074","Type":"ContainerStarted","Data":"1d2cae56f6faf26f2297ae5bb1f4f533e955bee0643563173c379d313020502f"} Nov 25 15:01:52 crc kubenswrapper[4796]: I1125 15:01:52.155600 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" podStartSLOduration=1.622118237 podStartE2EDuration="2.155554622s" podCreationTimestamp="2025-11-25 15:01:50 +0000 UTC" firstStartedPulling="2025-11-25 15:01:51.085793161 +0000 UTC m=+2239.428902605" lastFinishedPulling="2025-11-25 15:01:51.619229576 +0000 UTC m=+2239.962338990" observedRunningTime="2025-11-25 15:01:52.146520549 +0000 UTC m=+2240.489630013" watchObservedRunningTime="2025-11-25 15:01:52.155554622 +0000 UTC m=+2240.498664036" Nov 25 15:02:07 crc kubenswrapper[4796]: I1125 
15:02:07.320349 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r65qw"] Nov 25 15:02:07 crc kubenswrapper[4796]: I1125 15:02:07.324630 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r65qw" Nov 25 15:02:07 crc kubenswrapper[4796]: I1125 15:02:07.337458 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r65qw"] Nov 25 15:02:07 crc kubenswrapper[4796]: I1125 15:02:07.385493 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19662a6d-c366-4a79-9301-2a474d54792f-utilities\") pod \"community-operators-r65qw\" (UID: \"19662a6d-c366-4a79-9301-2a474d54792f\") " pod="openshift-marketplace/community-operators-r65qw" Nov 25 15:02:07 crc kubenswrapper[4796]: I1125 15:02:07.385635 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr54k\" (UniqueName: \"kubernetes.io/projected/19662a6d-c366-4a79-9301-2a474d54792f-kube-api-access-hr54k\") pod \"community-operators-r65qw\" (UID: \"19662a6d-c366-4a79-9301-2a474d54792f\") " pod="openshift-marketplace/community-operators-r65qw" Nov 25 15:02:07 crc kubenswrapper[4796]: I1125 15:02:07.385657 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19662a6d-c366-4a79-9301-2a474d54792f-catalog-content\") pod \"community-operators-r65qw\" (UID: \"19662a6d-c366-4a79-9301-2a474d54792f\") " pod="openshift-marketplace/community-operators-r65qw" Nov 25 15:02:07 crc kubenswrapper[4796]: I1125 15:02:07.486699 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr54k\" (UniqueName: 
\"kubernetes.io/projected/19662a6d-c366-4a79-9301-2a474d54792f-kube-api-access-hr54k\") pod \"community-operators-r65qw\" (UID: \"19662a6d-c366-4a79-9301-2a474d54792f\") " pod="openshift-marketplace/community-operators-r65qw" Nov 25 15:02:07 crc kubenswrapper[4796]: I1125 15:02:07.486742 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19662a6d-c366-4a79-9301-2a474d54792f-catalog-content\") pod \"community-operators-r65qw\" (UID: \"19662a6d-c366-4a79-9301-2a474d54792f\") " pod="openshift-marketplace/community-operators-r65qw" Nov 25 15:02:07 crc kubenswrapper[4796]: I1125 15:02:07.486891 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19662a6d-c366-4a79-9301-2a474d54792f-utilities\") pod \"community-operators-r65qw\" (UID: \"19662a6d-c366-4a79-9301-2a474d54792f\") " pod="openshift-marketplace/community-operators-r65qw" Nov 25 15:02:07 crc kubenswrapper[4796]: I1125 15:02:07.488241 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19662a6d-c366-4a79-9301-2a474d54792f-utilities\") pod \"community-operators-r65qw\" (UID: \"19662a6d-c366-4a79-9301-2a474d54792f\") " pod="openshift-marketplace/community-operators-r65qw" Nov 25 15:02:07 crc kubenswrapper[4796]: I1125 15:02:07.488447 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19662a6d-c366-4a79-9301-2a474d54792f-catalog-content\") pod \"community-operators-r65qw\" (UID: \"19662a6d-c366-4a79-9301-2a474d54792f\") " pod="openshift-marketplace/community-operators-r65qw" Nov 25 15:02:07 crc kubenswrapper[4796]: I1125 15:02:07.511386 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr54k\" (UniqueName: 
\"kubernetes.io/projected/19662a6d-c366-4a79-9301-2a474d54792f-kube-api-access-hr54k\") pod \"community-operators-r65qw\" (UID: \"19662a6d-c366-4a79-9301-2a474d54792f\") " pod="openshift-marketplace/community-operators-r65qw" Nov 25 15:02:07 crc kubenswrapper[4796]: I1125 15:02:07.677627 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r65qw" Nov 25 15:02:08 crc kubenswrapper[4796]: I1125 15:02:08.193493 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r65qw"] Nov 25 15:02:08 crc kubenswrapper[4796]: I1125 15:02:08.290591 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r65qw" event={"ID":"19662a6d-c366-4a79-9301-2a474d54792f","Type":"ContainerStarted","Data":"debe79669088fcd6c4da0ad859a7eb835209ea6e682625760330e3d2208b746a"} Nov 25 15:02:09 crc kubenswrapper[4796]: I1125 15:02:09.305846 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r65qw" event={"ID":"19662a6d-c366-4a79-9301-2a474d54792f","Type":"ContainerDied","Data":"9e5783f25dc3fafa6158a1d5eb5ec67d7ed3ee0ab7a56cccd4d7e5b5379d3197"} Nov 25 15:02:09 crc kubenswrapper[4796]: I1125 15:02:09.307144 4796 generic.go:334] "Generic (PLEG): container finished" podID="19662a6d-c366-4a79-9301-2a474d54792f" containerID="9e5783f25dc3fafa6158a1d5eb5ec67d7ed3ee0ab7a56cccd4d7e5b5379d3197" exitCode=0 Nov 25 15:02:10 crc kubenswrapper[4796]: I1125 15:02:10.497713 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tng8f"] Nov 25 15:02:10 crc kubenswrapper[4796]: I1125 15:02:10.502780 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tng8f" Nov 25 15:02:10 crc kubenswrapper[4796]: I1125 15:02:10.507183 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tng8f"] Nov 25 15:02:10 crc kubenswrapper[4796]: I1125 15:02:10.652983 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52a93d86-ca08-461e-ace3-59fad719676a-catalog-content\") pod \"certified-operators-tng8f\" (UID: \"52a93d86-ca08-461e-ace3-59fad719676a\") " pod="openshift-marketplace/certified-operators-tng8f" Nov 25 15:02:10 crc kubenswrapper[4796]: I1125 15:02:10.653390 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52a93d86-ca08-461e-ace3-59fad719676a-utilities\") pod \"certified-operators-tng8f\" (UID: \"52a93d86-ca08-461e-ace3-59fad719676a\") " pod="openshift-marketplace/certified-operators-tng8f" Nov 25 15:02:10 crc kubenswrapper[4796]: I1125 15:02:10.653692 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp8nt\" (UniqueName: \"kubernetes.io/projected/52a93d86-ca08-461e-ace3-59fad719676a-kube-api-access-tp8nt\") pod \"certified-operators-tng8f\" (UID: \"52a93d86-ca08-461e-ace3-59fad719676a\") " pod="openshift-marketplace/certified-operators-tng8f" Nov 25 15:02:10 crc kubenswrapper[4796]: I1125 15:02:10.755534 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52a93d86-ca08-461e-ace3-59fad719676a-catalog-content\") pod \"certified-operators-tng8f\" (UID: \"52a93d86-ca08-461e-ace3-59fad719676a\") " pod="openshift-marketplace/certified-operators-tng8f" Nov 25 15:02:10 crc kubenswrapper[4796]: I1125 15:02:10.755652 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52a93d86-ca08-461e-ace3-59fad719676a-utilities\") pod \"certified-operators-tng8f\" (UID: \"52a93d86-ca08-461e-ace3-59fad719676a\") " pod="openshift-marketplace/certified-operators-tng8f" Nov 25 15:02:10 crc kubenswrapper[4796]: I1125 15:02:10.755713 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp8nt\" (UniqueName: \"kubernetes.io/projected/52a93d86-ca08-461e-ace3-59fad719676a-kube-api-access-tp8nt\") pod \"certified-operators-tng8f\" (UID: \"52a93d86-ca08-461e-ace3-59fad719676a\") " pod="openshift-marketplace/certified-operators-tng8f" Nov 25 15:02:10 crc kubenswrapper[4796]: I1125 15:02:10.756307 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52a93d86-ca08-461e-ace3-59fad719676a-catalog-content\") pod \"certified-operators-tng8f\" (UID: \"52a93d86-ca08-461e-ace3-59fad719676a\") " pod="openshift-marketplace/certified-operators-tng8f" Nov 25 15:02:10 crc kubenswrapper[4796]: I1125 15:02:10.756349 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52a93d86-ca08-461e-ace3-59fad719676a-utilities\") pod \"certified-operators-tng8f\" (UID: \"52a93d86-ca08-461e-ace3-59fad719676a\") " pod="openshift-marketplace/certified-operators-tng8f" Nov 25 15:02:10 crc kubenswrapper[4796]: I1125 15:02:10.781453 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp8nt\" (UniqueName: \"kubernetes.io/projected/52a93d86-ca08-461e-ace3-59fad719676a-kube-api-access-tp8nt\") pod \"certified-operators-tng8f\" (UID: \"52a93d86-ca08-461e-ace3-59fad719676a\") " pod="openshift-marketplace/certified-operators-tng8f" Nov 25 15:02:10 crc kubenswrapper[4796]: I1125 15:02:10.826452 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tng8f" Nov 25 15:02:11 crc kubenswrapper[4796]: I1125 15:02:11.367971 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tng8f"] Nov 25 15:02:11 crc kubenswrapper[4796]: W1125 15:02:11.373383 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52a93d86_ca08_461e_ace3_59fad719676a.slice/crio-4c33f72bddcada5eceac1e1b326e3d916c34f39be8aa9cca04beff3b2223e66a WatchSource:0}: Error finding container 4c33f72bddcada5eceac1e1b326e3d916c34f39be8aa9cca04beff3b2223e66a: Status 404 returned error can't find the container with id 4c33f72bddcada5eceac1e1b326e3d916c34f39be8aa9cca04beff3b2223e66a Nov 25 15:02:12 crc kubenswrapper[4796]: I1125 15:02:12.335827 4796 generic.go:334] "Generic (PLEG): container finished" podID="52a93d86-ca08-461e-ace3-59fad719676a" containerID="155737d8804873566d39ee455122fe259409e701fb5b107dc133d651ed70df23" exitCode=0 Nov 25 15:02:12 crc kubenswrapper[4796]: I1125 15:02:12.335945 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tng8f" event={"ID":"52a93d86-ca08-461e-ace3-59fad719676a","Type":"ContainerDied","Data":"155737d8804873566d39ee455122fe259409e701fb5b107dc133d651ed70df23"} Nov 25 15:02:12 crc kubenswrapper[4796]: I1125 15:02:12.336429 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tng8f" event={"ID":"52a93d86-ca08-461e-ace3-59fad719676a","Type":"ContainerStarted","Data":"4c33f72bddcada5eceac1e1b326e3d916c34f39be8aa9cca04beff3b2223e66a"} Nov 25 15:02:14 crc kubenswrapper[4796]: I1125 15:02:14.367267 4796 generic.go:334] "Generic (PLEG): container finished" podID="19662a6d-c366-4a79-9301-2a474d54792f" containerID="5e4dc361047257588f327e5d780c12c4caa8014b957722e0258e0f53b49d026f" exitCode=0 Nov 25 15:02:14 crc kubenswrapper[4796]: I1125 
15:02:14.367323 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r65qw" event={"ID":"19662a6d-c366-4a79-9301-2a474d54792f","Type":"ContainerDied","Data":"5e4dc361047257588f327e5d780c12c4caa8014b957722e0258e0f53b49d026f"} Nov 25 15:02:16 crc kubenswrapper[4796]: I1125 15:02:16.391568 4796 generic.go:334] "Generic (PLEG): container finished" podID="52a93d86-ca08-461e-ace3-59fad719676a" containerID="d7f2a40b7cbed71440c1a6a9ce18e20e24f0886ae4a28b3959cb0dfd42a8dd03" exitCode=0 Nov 25 15:02:16 crc kubenswrapper[4796]: I1125 15:02:16.391754 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tng8f" event={"ID":"52a93d86-ca08-461e-ace3-59fad719676a","Type":"ContainerDied","Data":"d7f2a40b7cbed71440c1a6a9ce18e20e24f0886ae4a28b3959cb0dfd42a8dd03"} Nov 25 15:02:17 crc kubenswrapper[4796]: I1125 15:02:17.403178 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tng8f" event={"ID":"52a93d86-ca08-461e-ace3-59fad719676a","Type":"ContainerStarted","Data":"cb921eed2408dcbd5d0407ce42ae453c7f78ef6e95f50a5b61738d10c6d09731"} Nov 25 15:02:17 crc kubenswrapper[4796]: I1125 15:02:17.405924 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r65qw" event={"ID":"19662a6d-c366-4a79-9301-2a474d54792f","Type":"ContainerStarted","Data":"83180035d637507a0c8eb8a5ad1060f9e41db214039d2b985d16d116e7ddab6f"} Nov 25 15:02:17 crc kubenswrapper[4796]: I1125 15:02:17.423081 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tng8f" podStartSLOduration=3.7416254860000002 podStartE2EDuration="7.423060707s" podCreationTimestamp="2025-11-25 15:02:10 +0000 UTC" firstStartedPulling="2025-11-25 15:02:13.205970501 +0000 UTC m=+2261.549079925" lastFinishedPulling="2025-11-25 15:02:16.887405722 +0000 UTC m=+2265.230515146" 
observedRunningTime="2025-11-25 15:02:17.419695111 +0000 UTC m=+2265.762804545" watchObservedRunningTime="2025-11-25 15:02:17.423060707 +0000 UTC m=+2265.766170131" Nov 25 15:02:17 crc kubenswrapper[4796]: I1125 15:02:17.461396 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r65qw" podStartSLOduration=3.716549389 podStartE2EDuration="10.45445571s" podCreationTimestamp="2025-11-25 15:02:07 +0000 UTC" firstStartedPulling="2025-11-25 15:02:09.309299965 +0000 UTC m=+2257.652409399" lastFinishedPulling="2025-11-25 15:02:16.047206306 +0000 UTC m=+2264.390315720" observedRunningTime="2025-11-25 15:02:17.449696881 +0000 UTC m=+2265.792806325" watchObservedRunningTime="2025-11-25 15:02:17.45445571 +0000 UTC m=+2265.797565144" Nov 25 15:02:17 crc kubenswrapper[4796]: I1125 15:02:17.680180 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r65qw" Nov 25 15:02:17 crc kubenswrapper[4796]: I1125 15:02:17.680236 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r65qw" Nov 25 15:02:18 crc kubenswrapper[4796]: I1125 15:02:18.736715 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-r65qw" podUID="19662a6d-c366-4a79-9301-2a474d54792f" containerName="registry-server" probeResult="failure" output=< Nov 25 15:02:18 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Nov 25 15:02:18 crc kubenswrapper[4796]: > Nov 25 15:02:19 crc kubenswrapper[4796]: I1125 15:02:19.514144 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:02:19 crc kubenswrapper[4796]: I1125 
15:02:19.514243 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:02:19 crc kubenswrapper[4796]: I1125 15:02:19.514308 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 15:02:19 crc kubenswrapper[4796]: I1125 15:02:19.515527 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0"} pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 15:02:19 crc kubenswrapper[4796]: I1125 15:02:19.515620 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" containerID="cri-o://b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" gracePeriod=600 Nov 25 15:02:19 crc kubenswrapper[4796]: E1125 15:02:19.637439 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:02:20 crc kubenswrapper[4796]: I1125 15:02:20.438185 4796 generic.go:334] "Generic (PLEG): container finished" 
podID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" exitCode=0 Nov 25 15:02:20 crc kubenswrapper[4796]: I1125 15:02:20.438234 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerDied","Data":"b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0"} Nov 25 15:02:20 crc kubenswrapper[4796]: I1125 15:02:20.438336 4796 scope.go:117] "RemoveContainer" containerID="70d0d78805ff6ee84a8ea3338031d4b970ef33f4c653ba7d63ee0c9fa7a78f92" Nov 25 15:02:20 crc kubenswrapper[4796]: I1125 15:02:20.439177 4796 scope.go:117] "RemoveContainer" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" Nov 25 15:02:20 crc kubenswrapper[4796]: E1125 15:02:20.439541 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:02:20 crc kubenswrapper[4796]: I1125 15:02:20.827762 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tng8f" Nov 25 15:02:20 crc kubenswrapper[4796]: I1125 15:02:20.827852 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tng8f" Nov 25 15:02:20 crc kubenswrapper[4796]: I1125 15:02:20.889085 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tng8f" Nov 25 15:02:27 crc kubenswrapper[4796]: I1125 15:02:27.755004 4796 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/community-operators-r65qw" Nov 25 15:02:27 crc kubenswrapper[4796]: I1125 15:02:27.837155 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r65qw" Nov 25 15:02:27 crc kubenswrapper[4796]: I1125 15:02:27.934480 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r65qw"] Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.004257 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hff74"] Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.004594 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hff74" podUID="b0b227f5-fd98-48f5-8b0b-4d10096c407b" containerName="registry-server" containerID="cri-o://9aabfdf5ebf815325c1fb3ca7c9b013881f4b10e5783851778cc9377d035e874" gracePeriod=2 Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.448038 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hff74" Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.476689 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbpls\" (UniqueName: \"kubernetes.io/projected/b0b227f5-fd98-48f5-8b0b-4d10096c407b-kube-api-access-gbpls\") pod \"b0b227f5-fd98-48f5-8b0b-4d10096c407b\" (UID: \"b0b227f5-fd98-48f5-8b0b-4d10096c407b\") " Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.476750 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0b227f5-fd98-48f5-8b0b-4d10096c407b-catalog-content\") pod \"b0b227f5-fd98-48f5-8b0b-4d10096c407b\" (UID: \"b0b227f5-fd98-48f5-8b0b-4d10096c407b\") " Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.476942 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0b227f5-fd98-48f5-8b0b-4d10096c407b-utilities\") pod \"b0b227f5-fd98-48f5-8b0b-4d10096c407b\" (UID: \"b0b227f5-fd98-48f5-8b0b-4d10096c407b\") " Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.477501 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0b227f5-fd98-48f5-8b0b-4d10096c407b-utilities" (OuterVolumeSpecName: "utilities") pod "b0b227f5-fd98-48f5-8b0b-4d10096c407b" (UID: "b0b227f5-fd98-48f5-8b0b-4d10096c407b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.495794 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0b227f5-fd98-48f5-8b0b-4d10096c407b-kube-api-access-gbpls" (OuterVolumeSpecName: "kube-api-access-gbpls") pod "b0b227f5-fd98-48f5-8b0b-4d10096c407b" (UID: "b0b227f5-fd98-48f5-8b0b-4d10096c407b"). InnerVolumeSpecName "kube-api-access-gbpls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.551695 4796 generic.go:334] "Generic (PLEG): container finished" podID="b0b227f5-fd98-48f5-8b0b-4d10096c407b" containerID="9aabfdf5ebf815325c1fb3ca7c9b013881f4b10e5783851778cc9377d035e874" exitCode=0 Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.552687 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hff74" Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.553141 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hff74" event={"ID":"b0b227f5-fd98-48f5-8b0b-4d10096c407b","Type":"ContainerDied","Data":"9aabfdf5ebf815325c1fb3ca7c9b013881f4b10e5783851778cc9377d035e874"} Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.553169 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hff74" event={"ID":"b0b227f5-fd98-48f5-8b0b-4d10096c407b","Type":"ContainerDied","Data":"71fe692513cd37a20e2e41841828f17799e48ae84f40ec1605deb7a9ac9a5abd"} Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.553185 4796 scope.go:117] "RemoveContainer" containerID="9aabfdf5ebf815325c1fb3ca7c9b013881f4b10e5783851778cc9377d035e874" Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.562645 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0b227f5-fd98-48f5-8b0b-4d10096c407b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0b227f5-fd98-48f5-8b0b-4d10096c407b" (UID: "b0b227f5-fd98-48f5-8b0b-4d10096c407b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.579379 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0b227f5-fd98-48f5-8b0b-4d10096c407b-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.579404 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbpls\" (UniqueName: \"kubernetes.io/projected/b0b227f5-fd98-48f5-8b0b-4d10096c407b-kube-api-access-gbpls\") on node \"crc\" DevicePath \"\"" Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.579413 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0b227f5-fd98-48f5-8b0b-4d10096c407b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.586021 4796 scope.go:117] "RemoveContainer" containerID="bb501873c58676acf3214dcb45b7101bea9c1c35f534aae0f8e6ac900727aa8c" Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.612997 4796 scope.go:117] "RemoveContainer" containerID="93cce13ab9f1d32fceaf6ca0fd55c42f8f4cf21c42fe9bfdedc6ad0dca2f2740" Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.649516 4796 scope.go:117] "RemoveContainer" containerID="9aabfdf5ebf815325c1fb3ca7c9b013881f4b10e5783851778cc9377d035e874" Nov 25 15:02:28 crc kubenswrapper[4796]: E1125 15:02:28.650068 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aabfdf5ebf815325c1fb3ca7c9b013881f4b10e5783851778cc9377d035e874\": container with ID starting with 9aabfdf5ebf815325c1fb3ca7c9b013881f4b10e5783851778cc9377d035e874 not found: ID does not exist" containerID="9aabfdf5ebf815325c1fb3ca7c9b013881f4b10e5783851778cc9377d035e874" Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.650133 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9aabfdf5ebf815325c1fb3ca7c9b013881f4b10e5783851778cc9377d035e874"} err="failed to get container status \"9aabfdf5ebf815325c1fb3ca7c9b013881f4b10e5783851778cc9377d035e874\": rpc error: code = NotFound desc = could not find container \"9aabfdf5ebf815325c1fb3ca7c9b013881f4b10e5783851778cc9377d035e874\": container with ID starting with 9aabfdf5ebf815325c1fb3ca7c9b013881f4b10e5783851778cc9377d035e874 not found: ID does not exist" Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.650170 4796 scope.go:117] "RemoveContainer" containerID="bb501873c58676acf3214dcb45b7101bea9c1c35f534aae0f8e6ac900727aa8c" Nov 25 15:02:28 crc kubenswrapper[4796]: E1125 15:02:28.650489 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb501873c58676acf3214dcb45b7101bea9c1c35f534aae0f8e6ac900727aa8c\": container with ID starting with bb501873c58676acf3214dcb45b7101bea9c1c35f534aae0f8e6ac900727aa8c not found: ID does not exist" containerID="bb501873c58676acf3214dcb45b7101bea9c1c35f534aae0f8e6ac900727aa8c" Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.650543 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb501873c58676acf3214dcb45b7101bea9c1c35f534aae0f8e6ac900727aa8c"} err="failed to get container status \"bb501873c58676acf3214dcb45b7101bea9c1c35f534aae0f8e6ac900727aa8c\": rpc error: code = NotFound desc = could not find container \"bb501873c58676acf3214dcb45b7101bea9c1c35f534aae0f8e6ac900727aa8c\": container with ID starting with bb501873c58676acf3214dcb45b7101bea9c1c35f534aae0f8e6ac900727aa8c not found: ID does not exist" Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.650589 4796 scope.go:117] "RemoveContainer" containerID="93cce13ab9f1d32fceaf6ca0fd55c42f8f4cf21c42fe9bfdedc6ad0dca2f2740" Nov 25 15:02:28 crc kubenswrapper[4796]: E1125 15:02:28.650872 4796 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"93cce13ab9f1d32fceaf6ca0fd55c42f8f4cf21c42fe9bfdedc6ad0dca2f2740\": container with ID starting with 93cce13ab9f1d32fceaf6ca0fd55c42f8f4cf21c42fe9bfdedc6ad0dca2f2740 not found: ID does not exist" containerID="93cce13ab9f1d32fceaf6ca0fd55c42f8f4cf21c42fe9bfdedc6ad0dca2f2740" Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.650906 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93cce13ab9f1d32fceaf6ca0fd55c42f8f4cf21c42fe9bfdedc6ad0dca2f2740"} err="failed to get container status \"93cce13ab9f1d32fceaf6ca0fd55c42f8f4cf21c42fe9bfdedc6ad0dca2f2740\": rpc error: code = NotFound desc = could not find container \"93cce13ab9f1d32fceaf6ca0fd55c42f8f4cf21c42fe9bfdedc6ad0dca2f2740\": container with ID starting with 93cce13ab9f1d32fceaf6ca0fd55c42f8f4cf21c42fe9bfdedc6ad0dca2f2740 not found: ID does not exist" Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.883234 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hff74"] Nov 25 15:02:28 crc kubenswrapper[4796]: I1125 15:02:28.892974 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hff74"] Nov 25 15:02:30 crc kubenswrapper[4796]: I1125 15:02:30.425990 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0b227f5-fd98-48f5-8b0b-4d10096c407b" path="/var/lib/kubelet/pods/b0b227f5-fd98-48f5-8b0b-4d10096c407b/volumes" Nov 25 15:02:30 crc kubenswrapper[4796]: I1125 15:02:30.888030 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tng8f" Nov 25 15:02:33 crc kubenswrapper[4796]: I1125 15:02:33.206722 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tng8f"] Nov 25 15:02:33 crc kubenswrapper[4796]: I1125 15:02:33.207651 4796 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/certified-operators-tng8f" podUID="52a93d86-ca08-461e-ace3-59fad719676a" containerName="registry-server" containerID="cri-o://cb921eed2408dcbd5d0407ce42ae453c7f78ef6e95f50a5b61738d10c6d09731" gracePeriod=2 Nov 25 15:02:33 crc kubenswrapper[4796]: I1125 15:02:33.409933 4796 scope.go:117] "RemoveContainer" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" Nov 25 15:02:33 crc kubenswrapper[4796]: E1125 15:02:33.410588 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:02:33 crc kubenswrapper[4796]: I1125 15:02:33.631123 4796 generic.go:334] "Generic (PLEG): container finished" podID="52a93d86-ca08-461e-ace3-59fad719676a" containerID="cb921eed2408dcbd5d0407ce42ae453c7f78ef6e95f50a5b61738d10c6d09731" exitCode=0 Nov 25 15:02:33 crc kubenswrapper[4796]: I1125 15:02:33.631435 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tng8f" event={"ID":"52a93d86-ca08-461e-ace3-59fad719676a","Type":"ContainerDied","Data":"cb921eed2408dcbd5d0407ce42ae453c7f78ef6e95f50a5b61738d10c6d09731"} Nov 25 15:02:33 crc kubenswrapper[4796]: I1125 15:02:33.709233 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tng8f" Nov 25 15:02:33 crc kubenswrapper[4796]: I1125 15:02:33.880768 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp8nt\" (UniqueName: \"kubernetes.io/projected/52a93d86-ca08-461e-ace3-59fad719676a-kube-api-access-tp8nt\") pod \"52a93d86-ca08-461e-ace3-59fad719676a\" (UID: \"52a93d86-ca08-461e-ace3-59fad719676a\") " Nov 25 15:02:33 crc kubenswrapper[4796]: I1125 15:02:33.880871 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52a93d86-ca08-461e-ace3-59fad719676a-utilities\") pod \"52a93d86-ca08-461e-ace3-59fad719676a\" (UID: \"52a93d86-ca08-461e-ace3-59fad719676a\") " Nov 25 15:02:33 crc kubenswrapper[4796]: I1125 15:02:33.880961 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52a93d86-ca08-461e-ace3-59fad719676a-catalog-content\") pod \"52a93d86-ca08-461e-ace3-59fad719676a\" (UID: \"52a93d86-ca08-461e-ace3-59fad719676a\") " Nov 25 15:02:33 crc kubenswrapper[4796]: I1125 15:02:33.881755 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52a93d86-ca08-461e-ace3-59fad719676a-utilities" (OuterVolumeSpecName: "utilities") pod "52a93d86-ca08-461e-ace3-59fad719676a" (UID: "52a93d86-ca08-461e-ace3-59fad719676a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:02:33 crc kubenswrapper[4796]: I1125 15:02:33.885867 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52a93d86-ca08-461e-ace3-59fad719676a-kube-api-access-tp8nt" (OuterVolumeSpecName: "kube-api-access-tp8nt") pod "52a93d86-ca08-461e-ace3-59fad719676a" (UID: "52a93d86-ca08-461e-ace3-59fad719676a"). InnerVolumeSpecName "kube-api-access-tp8nt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:02:33 crc kubenswrapper[4796]: I1125 15:02:33.919831 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52a93d86-ca08-461e-ace3-59fad719676a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52a93d86-ca08-461e-ace3-59fad719676a" (UID: "52a93d86-ca08-461e-ace3-59fad719676a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:02:33 crc kubenswrapper[4796]: I1125 15:02:33.983406 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp8nt\" (UniqueName: \"kubernetes.io/projected/52a93d86-ca08-461e-ace3-59fad719676a-kube-api-access-tp8nt\") on node \"crc\" DevicePath \"\"" Nov 25 15:02:33 crc kubenswrapper[4796]: I1125 15:02:33.983440 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52a93d86-ca08-461e-ace3-59fad719676a-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:02:33 crc kubenswrapper[4796]: I1125 15:02:33.983450 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52a93d86-ca08-461e-ace3-59fad719676a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:02:34 crc kubenswrapper[4796]: I1125 15:02:34.647536 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tng8f" event={"ID":"52a93d86-ca08-461e-ace3-59fad719676a","Type":"ContainerDied","Data":"4c33f72bddcada5eceac1e1b326e3d916c34f39be8aa9cca04beff3b2223e66a"} Nov 25 15:02:34 crc kubenswrapper[4796]: I1125 15:02:34.647642 4796 scope.go:117] "RemoveContainer" containerID="cb921eed2408dcbd5d0407ce42ae453c7f78ef6e95f50a5b61738d10c6d09731" Nov 25 15:02:34 crc kubenswrapper[4796]: I1125 15:02:34.647677 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tng8f" Nov 25 15:02:34 crc kubenswrapper[4796]: I1125 15:02:34.684411 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tng8f"] Nov 25 15:02:34 crc kubenswrapper[4796]: I1125 15:02:34.687765 4796 scope.go:117] "RemoveContainer" containerID="d7f2a40b7cbed71440c1a6a9ce18e20e24f0886ae4a28b3959cb0dfd42a8dd03" Nov 25 15:02:34 crc kubenswrapper[4796]: I1125 15:02:34.696248 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tng8f"] Nov 25 15:02:34 crc kubenswrapper[4796]: I1125 15:02:34.710795 4796 scope.go:117] "RemoveContainer" containerID="155737d8804873566d39ee455122fe259409e701fb5b107dc133d651ed70df23" Nov 25 15:02:36 crc kubenswrapper[4796]: I1125 15:02:36.430418 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52a93d86-ca08-461e-ace3-59fad719676a" path="/var/lib/kubelet/pods/52a93d86-ca08-461e-ace3-59fad719676a/volumes" Nov 25 15:02:40 crc kubenswrapper[4796]: I1125 15:02:40.706439 4796 generic.go:334] "Generic (PLEG): container finished" podID="3a001f8e-537d-4c17-88cd-b1c2a8727074" containerID="1d2cae56f6faf26f2297ae5bb1f4f533e955bee0643563173c379d313020502f" exitCode=0 Nov 25 15:02:40 crc kubenswrapper[4796]: I1125 15:02:40.707083 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" event={"ID":"3a001f8e-537d-4c17-88cd-b1c2a8727074","Type":"ContainerDied","Data":"1d2cae56f6faf26f2297ae5bb1f4f533e955bee0643563173c379d313020502f"} Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.232425 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.348053 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-ssh-key\") pod \"3a001f8e-537d-4c17-88cd-b1c2a8727074\" (UID: \"3a001f8e-537d-4c17-88cd-b1c2a8727074\") " Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.348144 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-neutron-metadata-combined-ca-bundle\") pod \"3a001f8e-537d-4c17-88cd-b1c2a8727074\" (UID: \"3a001f8e-537d-4c17-88cd-b1c2a8727074\") " Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.348182 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-inventory\") pod \"3a001f8e-537d-4c17-88cd-b1c2a8727074\" (UID: \"3a001f8e-537d-4c17-88cd-b1c2a8727074\") " Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.348240 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-neutron-ovn-metadata-agent-neutron-config-0\") pod \"3a001f8e-537d-4c17-88cd-b1c2a8727074\" (UID: \"3a001f8e-537d-4c17-88cd-b1c2a8727074\") " Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.348319 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sngvk\" (UniqueName: \"kubernetes.io/projected/3a001f8e-537d-4c17-88cd-b1c2a8727074-kube-api-access-sngvk\") pod \"3a001f8e-537d-4c17-88cd-b1c2a8727074\" (UID: \"3a001f8e-537d-4c17-88cd-b1c2a8727074\") " Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 
15:02:42.348404 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-nova-metadata-neutron-config-0\") pod \"3a001f8e-537d-4c17-88cd-b1c2a8727074\" (UID: \"3a001f8e-537d-4c17-88cd-b1c2a8727074\") " Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.355865 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "3a001f8e-537d-4c17-88cd-b1c2a8727074" (UID: "3a001f8e-537d-4c17-88cd-b1c2a8727074"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.357344 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a001f8e-537d-4c17-88cd-b1c2a8727074-kube-api-access-sngvk" (OuterVolumeSpecName: "kube-api-access-sngvk") pod "3a001f8e-537d-4c17-88cd-b1c2a8727074" (UID: "3a001f8e-537d-4c17-88cd-b1c2a8727074"). InnerVolumeSpecName "kube-api-access-sngvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.377954 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "3a001f8e-537d-4c17-88cd-b1c2a8727074" (UID: "3a001f8e-537d-4c17-88cd-b1c2a8727074"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.378376 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3a001f8e-537d-4c17-88cd-b1c2a8727074" (UID: "3a001f8e-537d-4c17-88cd-b1c2a8727074"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.384743 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-inventory" (OuterVolumeSpecName: "inventory") pod "3a001f8e-537d-4c17-88cd-b1c2a8727074" (UID: "3a001f8e-537d-4c17-88cd-b1c2a8727074"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.386414 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "3a001f8e-537d-4c17-88cd-b1c2a8727074" (UID: "3a001f8e-537d-4c17-88cd-b1c2a8727074"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.450419 4796 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.450820 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sngvk\" (UniqueName: \"kubernetes.io/projected/3a001f8e-537d-4c17-88cd-b1c2a8727074-kube-api-access-sngvk\") on node \"crc\" DevicePath \"\"" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.450841 4796 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.450851 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.450860 4796 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.450869 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3a001f8e-537d-4c17-88cd-b1c2a8727074-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.728438 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" 
event={"ID":"3a001f8e-537d-4c17-88cd-b1c2a8727074","Type":"ContainerDied","Data":"c349ada872e0949543c06323ea452d0cacbe33e0b283d8f70a0c316c0c0db7a0"} Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.728474 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c349ada872e0949543c06323ea452d0cacbe33e0b283d8f70a0c316c0c0db7a0" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.728513 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.847118 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb"] Nov 25 15:02:42 crc kubenswrapper[4796]: E1125 15:02:42.847604 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52a93d86-ca08-461e-ace3-59fad719676a" containerName="extract-content" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.847625 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="52a93d86-ca08-461e-ace3-59fad719676a" containerName="extract-content" Nov 25 15:02:42 crc kubenswrapper[4796]: E1125 15:02:42.847636 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0b227f5-fd98-48f5-8b0b-4d10096c407b" containerName="registry-server" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.847643 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0b227f5-fd98-48f5-8b0b-4d10096c407b" containerName="registry-server" Nov 25 15:02:42 crc kubenswrapper[4796]: E1125 15:02:42.847663 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52a93d86-ca08-461e-ace3-59fad719676a" containerName="extract-utilities" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.847668 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="52a93d86-ca08-461e-ace3-59fad719676a" containerName="extract-utilities" Nov 25 15:02:42 crc 
kubenswrapper[4796]: E1125 15:02:42.847679 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0b227f5-fd98-48f5-8b0b-4d10096c407b" containerName="extract-utilities" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.847684 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0b227f5-fd98-48f5-8b0b-4d10096c407b" containerName="extract-utilities" Nov 25 15:02:42 crc kubenswrapper[4796]: E1125 15:02:42.847699 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52a93d86-ca08-461e-ace3-59fad719676a" containerName="registry-server" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.847705 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="52a93d86-ca08-461e-ace3-59fad719676a" containerName="registry-server" Nov 25 15:02:42 crc kubenswrapper[4796]: E1125 15:02:42.847720 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a001f8e-537d-4c17-88cd-b1c2a8727074" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.847728 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a001f8e-537d-4c17-88cd-b1c2a8727074" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 15:02:42 crc kubenswrapper[4796]: E1125 15:02:42.847745 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0b227f5-fd98-48f5-8b0b-4d10096c407b" containerName="extract-content" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.847750 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0b227f5-fd98-48f5-8b0b-4d10096c407b" containerName="extract-content" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.847952 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="52a93d86-ca08-461e-ace3-59fad719676a" containerName="registry-server" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.847969 4796 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b0b227f5-fd98-48f5-8b0b-4d10096c407b" containerName="registry-server" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.847985 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a001f8e-537d-4c17-88cd-b1c2a8727074" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.848591 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.850440 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.851265 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.851281 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.851283 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.854741 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n2hfx" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.874903 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb"] Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.960762 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e5ea533-89ca-434d-bde5-0222fa319b66-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb\" (UID: \"5e5ea533-89ca-434d-bde5-0222fa319b66\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.960826 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzmxz\" (UniqueName: \"kubernetes.io/projected/5e5ea533-89ca-434d-bde5-0222fa319b66-kube-api-access-wzmxz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb\" (UID: \"5e5ea533-89ca-434d-bde5-0222fa319b66\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.960863 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5e5ea533-89ca-434d-bde5-0222fa319b66-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb\" (UID: \"5e5ea533-89ca-434d-bde5-0222fa319b66\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.960977 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e5ea533-89ca-434d-bde5-0222fa319b66-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb\" (UID: \"5e5ea533-89ca-434d-bde5-0222fa319b66\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb" Nov 25 15:02:42 crc kubenswrapper[4796]: I1125 15:02:42.961045 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e5ea533-89ca-434d-bde5-0222fa319b66-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb\" (UID: \"5e5ea533-89ca-434d-bde5-0222fa319b66\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb" Nov 25 15:02:43 crc kubenswrapper[4796]: I1125 15:02:43.063670 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5e5ea533-89ca-434d-bde5-0222fa319b66-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb\" (UID: \"5e5ea533-89ca-434d-bde5-0222fa319b66\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb" Nov 25 15:02:43 crc kubenswrapper[4796]: I1125 15:02:43.063899 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e5ea533-89ca-434d-bde5-0222fa319b66-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb\" (UID: \"5e5ea533-89ca-434d-bde5-0222fa319b66\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb" Nov 25 15:02:43 crc kubenswrapper[4796]: I1125 15:02:43.063951 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e5ea533-89ca-434d-bde5-0222fa319b66-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb\" (UID: \"5e5ea533-89ca-434d-bde5-0222fa319b66\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb" Nov 25 15:02:43 crc kubenswrapper[4796]: I1125 15:02:43.064316 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e5ea533-89ca-434d-bde5-0222fa319b66-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb\" (UID: \"5e5ea533-89ca-434d-bde5-0222fa319b66\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb" Nov 25 15:02:43 crc kubenswrapper[4796]: I1125 15:02:43.065071 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzmxz\" (UniqueName: \"kubernetes.io/projected/5e5ea533-89ca-434d-bde5-0222fa319b66-kube-api-access-wzmxz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb\" (UID: \"5e5ea533-89ca-434d-bde5-0222fa319b66\") 
" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb" Nov 25 15:02:43 crc kubenswrapper[4796]: I1125 15:02:43.070672 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e5ea533-89ca-434d-bde5-0222fa319b66-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb\" (UID: \"5e5ea533-89ca-434d-bde5-0222fa319b66\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb" Nov 25 15:02:43 crc kubenswrapper[4796]: I1125 15:02:43.071080 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e5ea533-89ca-434d-bde5-0222fa319b66-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb\" (UID: \"5e5ea533-89ca-434d-bde5-0222fa319b66\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb" Nov 25 15:02:43 crc kubenswrapper[4796]: I1125 15:02:43.075229 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e5ea533-89ca-434d-bde5-0222fa319b66-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb\" (UID: \"5e5ea533-89ca-434d-bde5-0222fa319b66\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb" Nov 25 15:02:43 crc kubenswrapper[4796]: I1125 15:02:43.084693 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5e5ea533-89ca-434d-bde5-0222fa319b66-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb\" (UID: \"5e5ea533-89ca-434d-bde5-0222fa319b66\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb" Nov 25 15:02:43 crc kubenswrapper[4796]: I1125 15:02:43.094549 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzmxz\" (UniqueName: 
\"kubernetes.io/projected/5e5ea533-89ca-434d-bde5-0222fa319b66-kube-api-access-wzmxz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb\" (UID: \"5e5ea533-89ca-434d-bde5-0222fa319b66\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb" Nov 25 15:02:43 crc kubenswrapper[4796]: I1125 15:02:43.173836 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb" Nov 25 15:02:43 crc kubenswrapper[4796]: I1125 15:02:43.786325 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb"] Nov 25 15:02:44 crc kubenswrapper[4796]: I1125 15:02:44.745419 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb" event={"ID":"5e5ea533-89ca-434d-bde5-0222fa319b66","Type":"ContainerStarted","Data":"b18f4fbfb495ed5d51938553edd09d5acb689fc060c63a1e1e11b8a465f357f1"} Nov 25 15:02:44 crc kubenswrapper[4796]: I1125 15:02:44.745790 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb" event={"ID":"5e5ea533-89ca-434d-bde5-0222fa319b66","Type":"ContainerStarted","Data":"261d3c9045a448b6ec69887d69622bc79f8de8a06153cff943985cad88f474ec"} Nov 25 15:02:44 crc kubenswrapper[4796]: I1125 15:02:44.766760 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb" podStartSLOduration=2.125829481 podStartE2EDuration="2.76674348s" podCreationTimestamp="2025-11-25 15:02:42 +0000 UTC" firstStartedPulling="2025-11-25 15:02:43.795281675 +0000 UTC m=+2292.138391099" lastFinishedPulling="2025-11-25 15:02:44.436195654 +0000 UTC m=+2292.779305098" observedRunningTime="2025-11-25 15:02:44.761304459 +0000 UTC m=+2293.104413883" watchObservedRunningTime="2025-11-25 15:02:44.76674348 +0000 UTC m=+2293.109852904" Nov 25 15:02:46 crc 
kubenswrapper[4796]: I1125 15:02:46.410008 4796 scope.go:117] "RemoveContainer" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" Nov 25 15:02:46 crc kubenswrapper[4796]: E1125 15:02:46.410613 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:02:59 crc kubenswrapper[4796]: I1125 15:02:59.409011 4796 scope.go:117] "RemoveContainer" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" Nov 25 15:02:59 crc kubenswrapper[4796]: E1125 15:02:59.409686 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:03:12 crc kubenswrapper[4796]: I1125 15:03:12.418282 4796 scope.go:117] "RemoveContainer" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" Nov 25 15:03:12 crc kubenswrapper[4796]: E1125 15:03:12.420745 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 
25 15:03:25 crc kubenswrapper[4796]: I1125 15:03:25.409392 4796 scope.go:117] "RemoveContainer" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" Nov 25 15:03:25 crc kubenswrapper[4796]: E1125 15:03:25.410123 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:03:36 crc kubenswrapper[4796]: I1125 15:03:36.409896 4796 scope.go:117] "RemoveContainer" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" Nov 25 15:03:36 crc kubenswrapper[4796]: E1125 15:03:36.410837 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:03:49 crc kubenswrapper[4796]: I1125 15:03:49.409553 4796 scope.go:117] "RemoveContainer" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" Nov 25 15:03:49 crc kubenswrapper[4796]: E1125 15:03:49.410594 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" 
podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:04:01 crc kubenswrapper[4796]: I1125 15:04:01.409551 4796 scope.go:117] "RemoveContainer" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" Nov 25 15:04:01 crc kubenswrapper[4796]: E1125 15:04:01.410665 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:04:12 crc kubenswrapper[4796]: I1125 15:04:12.417013 4796 scope.go:117] "RemoveContainer" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" Nov 25 15:04:12 crc kubenswrapper[4796]: E1125 15:04:12.418430 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:04:25 crc kubenswrapper[4796]: I1125 15:04:25.409642 4796 scope.go:117] "RemoveContainer" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" Nov 25 15:04:25 crc kubenswrapper[4796]: E1125 15:04:25.410381 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:04:38 crc kubenswrapper[4796]: I1125 15:04:38.409449 4796 scope.go:117] "RemoveContainer" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" Nov 25 15:04:38 crc kubenswrapper[4796]: E1125 15:04:38.410855 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:04:50 crc kubenswrapper[4796]: I1125 15:04:50.409743 4796 scope.go:117] "RemoveContainer" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" Nov 25 15:04:50 crc kubenswrapper[4796]: E1125 15:04:50.410694 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:05:03 crc kubenswrapper[4796]: I1125 15:05:03.410643 4796 scope.go:117] "RemoveContainer" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" Nov 25 15:05:03 crc kubenswrapper[4796]: E1125 15:05:03.411657 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:05:17 crc kubenswrapper[4796]: I1125 15:05:17.410134 4796 scope.go:117] "RemoveContainer" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" Nov 25 15:05:17 crc kubenswrapper[4796]: E1125 15:05:17.411048 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:05:29 crc kubenswrapper[4796]: I1125 15:05:29.409214 4796 scope.go:117] "RemoveContainer" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" Nov 25 15:05:29 crc kubenswrapper[4796]: E1125 15:05:29.410016 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:05:44 crc kubenswrapper[4796]: I1125 15:05:44.410031 4796 scope.go:117] "RemoveContainer" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" Nov 25 15:05:44 crc kubenswrapper[4796]: E1125 15:05:44.411811 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:05:56 crc kubenswrapper[4796]: I1125 15:05:56.409756 4796 scope.go:117] "RemoveContainer" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" Nov 25 15:05:56 crc kubenswrapper[4796]: E1125 15:05:56.410513 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:06:11 crc kubenswrapper[4796]: I1125 15:06:11.409751 4796 scope.go:117] "RemoveContainer" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" Nov 25 15:06:11 crc kubenswrapper[4796]: E1125 15:06:11.410674 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:06:22 crc kubenswrapper[4796]: I1125 15:06:22.415709 4796 scope.go:117] "RemoveContainer" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" Nov 25 15:06:22 crc kubenswrapper[4796]: E1125 15:06:22.416547 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:06:36 crc kubenswrapper[4796]: I1125 15:06:36.410304 4796 scope.go:117] "RemoveContainer" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" Nov 25 15:06:36 crc kubenswrapper[4796]: E1125 15:06:36.411039 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:06:48 crc kubenswrapper[4796]: I1125 15:06:48.409867 4796 scope.go:117] "RemoveContainer" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" Nov 25 15:06:48 crc kubenswrapper[4796]: E1125 15:06:48.411165 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:06:51 crc kubenswrapper[4796]: I1125 15:06:51.304332 4796 generic.go:334] "Generic (PLEG): container finished" podID="5e5ea533-89ca-434d-bde5-0222fa319b66" containerID="b18f4fbfb495ed5d51938553edd09d5acb689fc060c63a1e1e11b8a465f357f1" exitCode=0 Nov 25 15:06:51 crc kubenswrapper[4796]: I1125 15:06:51.304462 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb" event={"ID":"5e5ea533-89ca-434d-bde5-0222fa319b66","Type":"ContainerDied","Data":"b18f4fbfb495ed5d51938553edd09d5acb689fc060c63a1e1e11b8a465f357f1"} Nov 25 15:06:52 crc kubenswrapper[4796]: I1125 15:06:52.759638 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb" Nov 25 15:06:52 crc kubenswrapper[4796]: I1125 15:06:52.909922 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e5ea533-89ca-434d-bde5-0222fa319b66-inventory\") pod \"5e5ea533-89ca-434d-bde5-0222fa319b66\" (UID: \"5e5ea533-89ca-434d-bde5-0222fa319b66\") " Nov 25 15:06:52 crc kubenswrapper[4796]: I1125 15:06:52.910284 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e5ea533-89ca-434d-bde5-0222fa319b66-libvirt-combined-ca-bundle\") pod \"5e5ea533-89ca-434d-bde5-0222fa319b66\" (UID: \"5e5ea533-89ca-434d-bde5-0222fa319b66\") " Nov 25 15:06:52 crc kubenswrapper[4796]: I1125 15:06:52.910414 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzmxz\" (UniqueName: \"kubernetes.io/projected/5e5ea533-89ca-434d-bde5-0222fa319b66-kube-api-access-wzmxz\") pod \"5e5ea533-89ca-434d-bde5-0222fa319b66\" (UID: \"5e5ea533-89ca-434d-bde5-0222fa319b66\") " Nov 25 15:06:52 crc kubenswrapper[4796]: I1125 15:06:52.910484 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e5ea533-89ca-434d-bde5-0222fa319b66-ssh-key\") pod \"5e5ea533-89ca-434d-bde5-0222fa319b66\" (UID: \"5e5ea533-89ca-434d-bde5-0222fa319b66\") " Nov 25 15:06:52 crc kubenswrapper[4796]: I1125 15:06:52.910507 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5e5ea533-89ca-434d-bde5-0222fa319b66-libvirt-secret-0\") pod \"5e5ea533-89ca-434d-bde5-0222fa319b66\" (UID: \"5e5ea533-89ca-434d-bde5-0222fa319b66\") " Nov 25 15:06:52 crc kubenswrapper[4796]: I1125 15:06:52.916060 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e5ea533-89ca-434d-bde5-0222fa319b66-kube-api-access-wzmxz" (OuterVolumeSpecName: "kube-api-access-wzmxz") pod "5e5ea533-89ca-434d-bde5-0222fa319b66" (UID: "5e5ea533-89ca-434d-bde5-0222fa319b66"). InnerVolumeSpecName "kube-api-access-wzmxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:06:52 crc kubenswrapper[4796]: I1125 15:06:52.916134 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e5ea533-89ca-434d-bde5-0222fa319b66-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "5e5ea533-89ca-434d-bde5-0222fa319b66" (UID: "5e5ea533-89ca-434d-bde5-0222fa319b66"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:06:52 crc kubenswrapper[4796]: I1125 15:06:52.938555 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e5ea533-89ca-434d-bde5-0222fa319b66-inventory" (OuterVolumeSpecName: "inventory") pod "5e5ea533-89ca-434d-bde5-0222fa319b66" (UID: "5e5ea533-89ca-434d-bde5-0222fa319b66"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:06:52 crc kubenswrapper[4796]: I1125 15:06:52.949885 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e5ea533-89ca-434d-bde5-0222fa319b66-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "5e5ea533-89ca-434d-bde5-0222fa319b66" (UID: "5e5ea533-89ca-434d-bde5-0222fa319b66"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:06:52 crc kubenswrapper[4796]: I1125 15:06:52.966343 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e5ea533-89ca-434d-bde5-0222fa319b66-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5e5ea533-89ca-434d-bde5-0222fa319b66" (UID: "5e5ea533-89ca-434d-bde5-0222fa319b66"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.012371 4796 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e5ea533-89ca-434d-bde5-0222fa319b66-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.012402 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzmxz\" (UniqueName: \"kubernetes.io/projected/5e5ea533-89ca-434d-bde5-0222fa319b66-kube-api-access-wzmxz\") on node \"crc\" DevicePath \"\"" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.012412 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e5ea533-89ca-434d-bde5-0222fa319b66-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.012422 4796 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5e5ea533-89ca-434d-bde5-0222fa319b66-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.012430 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e5ea533-89ca-434d-bde5-0222fa319b66-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.326676 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb" 
event={"ID":"5e5ea533-89ca-434d-bde5-0222fa319b66","Type":"ContainerDied","Data":"261d3c9045a448b6ec69887d69622bc79f8de8a06153cff943985cad88f474ec"} Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.326742 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="261d3c9045a448b6ec69887d69622bc79f8de8a06153cff943985cad88f474ec" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.326786 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.468404 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn"] Nov 25 15:06:53 crc kubenswrapper[4796]: E1125 15:06:53.468889 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e5ea533-89ca-434d-bde5-0222fa319b66" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.468914 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e5ea533-89ca-434d-bde5-0222fa319b66" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.469100 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e5ea533-89ca-434d-bde5-0222fa319b66" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.469807 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.475964 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.476073 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.493923 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.494221 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.494351 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.494527 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n2hfx" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.494871 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.494968 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn"] Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.631468 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.631515 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.631688 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.631758 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.631890 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.631928 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.631965 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmq59\" (UniqueName: \"kubernetes.io/projected/8c595aba-53f4-47cf-9b97-c489fb013f6e-kube-api-access-zmq59\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.631996 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.632034 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.733715 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.734057 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.734098 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.734138 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmq59\" (UniqueName: \"kubernetes.io/projected/8c595aba-53f4-47cf-9b97-c489fb013f6e-kube-api-access-zmq59\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.734170 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.734206 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.734269 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.734292 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.734376 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.735062 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 
crc kubenswrapper[4796]: I1125 15:06:53.738611 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.739097 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.739331 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.740556 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.740605 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-migration-ssh-key-1\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.746027 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.747062 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.759510 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmq59\" (UniqueName: \"kubernetes.io/projected/8c595aba-53f4-47cf-9b97-c489fb013f6e-kube-api-access-zmq59\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5l2zn\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:53 crc kubenswrapper[4796]: I1125 15:06:53.803153 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:06:54 crc kubenswrapper[4796]: I1125 15:06:54.328212 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn"] Nov 25 15:06:54 crc kubenswrapper[4796]: I1125 15:06:54.334417 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 15:06:55 crc kubenswrapper[4796]: I1125 15:06:55.345923 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" event={"ID":"8c595aba-53f4-47cf-9b97-c489fb013f6e","Type":"ContainerStarted","Data":"1790e5cf8b2dd1cac01ff4890490314a324032d19e8276f982c86cfc5419443f"} Nov 25 15:06:55 crc kubenswrapper[4796]: I1125 15:06:55.345965 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" event={"ID":"8c595aba-53f4-47cf-9b97-c489fb013f6e","Type":"ContainerStarted","Data":"cd5bc2187a632393f0c85e7ca3cb32bc35035c2a2e167cd975214b7685fbd9bd"} Nov 25 15:06:55 crc kubenswrapper[4796]: I1125 15:06:55.394938 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" podStartSLOduration=1.985440791 podStartE2EDuration="2.394909394s" podCreationTimestamp="2025-11-25 15:06:53 +0000 UTC" firstStartedPulling="2025-11-25 15:06:54.334161817 +0000 UTC m=+2542.677271241" lastFinishedPulling="2025-11-25 15:06:54.74363041 +0000 UTC m=+2543.086739844" observedRunningTime="2025-11-25 15:06:55.362132599 +0000 UTC m=+2543.705242093" watchObservedRunningTime="2025-11-25 15:06:55.394909394 +0000 UTC m=+2543.738018838" Nov 25 15:07:02 crc kubenswrapper[4796]: I1125 15:07:02.421717 4796 scope.go:117] "RemoveContainer" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" Nov 25 15:07:02 crc kubenswrapper[4796]: E1125 15:07:02.423115 4796 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:07:15 crc kubenswrapper[4796]: I1125 15:07:15.409341 4796 scope.go:117] "RemoveContainer" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" Nov 25 15:07:15 crc kubenswrapper[4796]: E1125 15:07:15.410082 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:07:29 crc kubenswrapper[4796]: I1125 15:07:29.409895 4796 scope.go:117] "RemoveContainer" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" Nov 25 15:07:30 crc kubenswrapper[4796]: I1125 15:07:30.678849 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerStarted","Data":"90cd350136ae8324b154646518487385fea8d658259347ebba5e0c7e669bf9e5"} Nov 25 15:09:36 crc kubenswrapper[4796]: I1125 15:09:36.484857 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f5xps"] Nov 25 15:09:36 crc kubenswrapper[4796]: I1125 15:09:36.489163 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f5xps" Nov 25 15:09:36 crc kubenswrapper[4796]: I1125 15:09:36.495479 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f5xps"] Nov 25 15:09:36 crc kubenswrapper[4796]: I1125 15:09:36.615353 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b44682b-4eeb-434a-a769-94289e240d6e-catalog-content\") pod \"redhat-operators-f5xps\" (UID: \"5b44682b-4eeb-434a-a769-94289e240d6e\") " pod="openshift-marketplace/redhat-operators-f5xps" Nov 25 15:09:36 crc kubenswrapper[4796]: I1125 15:09:36.615423 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b44682b-4eeb-434a-a769-94289e240d6e-utilities\") pod \"redhat-operators-f5xps\" (UID: \"5b44682b-4eeb-434a-a769-94289e240d6e\") " pod="openshift-marketplace/redhat-operators-f5xps" Nov 25 15:09:36 crc kubenswrapper[4796]: I1125 15:09:36.615488 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk96s\" (UniqueName: \"kubernetes.io/projected/5b44682b-4eeb-434a-a769-94289e240d6e-kube-api-access-pk96s\") pod \"redhat-operators-f5xps\" (UID: \"5b44682b-4eeb-434a-a769-94289e240d6e\") " pod="openshift-marketplace/redhat-operators-f5xps" Nov 25 15:09:36 crc kubenswrapper[4796]: I1125 15:09:36.717549 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b44682b-4eeb-434a-a769-94289e240d6e-catalog-content\") pod \"redhat-operators-f5xps\" (UID: \"5b44682b-4eeb-434a-a769-94289e240d6e\") " pod="openshift-marketplace/redhat-operators-f5xps" Nov 25 15:09:36 crc kubenswrapper[4796]: I1125 15:09:36.717647 4796 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b44682b-4eeb-434a-a769-94289e240d6e-utilities\") pod \"redhat-operators-f5xps\" (UID: \"5b44682b-4eeb-434a-a769-94289e240d6e\") " pod="openshift-marketplace/redhat-operators-f5xps" Nov 25 15:09:36 crc kubenswrapper[4796]: I1125 15:09:36.717732 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk96s\" (UniqueName: \"kubernetes.io/projected/5b44682b-4eeb-434a-a769-94289e240d6e-kube-api-access-pk96s\") pod \"redhat-operators-f5xps\" (UID: \"5b44682b-4eeb-434a-a769-94289e240d6e\") " pod="openshift-marketplace/redhat-operators-f5xps" Nov 25 15:09:36 crc kubenswrapper[4796]: I1125 15:09:36.718130 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b44682b-4eeb-434a-a769-94289e240d6e-catalog-content\") pod \"redhat-operators-f5xps\" (UID: \"5b44682b-4eeb-434a-a769-94289e240d6e\") " pod="openshift-marketplace/redhat-operators-f5xps" Nov 25 15:09:36 crc kubenswrapper[4796]: I1125 15:09:36.718240 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b44682b-4eeb-434a-a769-94289e240d6e-utilities\") pod \"redhat-operators-f5xps\" (UID: \"5b44682b-4eeb-434a-a769-94289e240d6e\") " pod="openshift-marketplace/redhat-operators-f5xps" Nov 25 15:09:36 crc kubenswrapper[4796]: I1125 15:09:36.742651 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk96s\" (UniqueName: \"kubernetes.io/projected/5b44682b-4eeb-434a-a769-94289e240d6e-kube-api-access-pk96s\") pod \"redhat-operators-f5xps\" (UID: \"5b44682b-4eeb-434a-a769-94289e240d6e\") " pod="openshift-marketplace/redhat-operators-f5xps" Nov 25 15:09:36 crc kubenswrapper[4796]: I1125 15:09:36.820705 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f5xps" Nov 25 15:09:37 crc kubenswrapper[4796]: I1125 15:09:37.323722 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f5xps"] Nov 25 15:09:37 crc kubenswrapper[4796]: I1125 15:09:37.963629 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" event={"ID":"8c595aba-53f4-47cf-9b97-c489fb013f6e","Type":"ContainerDied","Data":"1790e5cf8b2dd1cac01ff4890490314a324032d19e8276f982c86cfc5419443f"} Nov 25 15:09:37 crc kubenswrapper[4796]: I1125 15:09:37.963440 4796 generic.go:334] "Generic (PLEG): container finished" podID="8c595aba-53f4-47cf-9b97-c489fb013f6e" containerID="1790e5cf8b2dd1cac01ff4890490314a324032d19e8276f982c86cfc5419443f" exitCode=0 Nov 25 15:09:37 crc kubenswrapper[4796]: I1125 15:09:37.967216 4796 generic.go:334] "Generic (PLEG): container finished" podID="5b44682b-4eeb-434a-a769-94289e240d6e" containerID="39e502bf62131567dabdc5c2165d500202b342d77f86a5f5e0ec8651702ee294" exitCode=0 Nov 25 15:09:37 crc kubenswrapper[4796]: I1125 15:09:37.967309 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f5xps" event={"ID":"5b44682b-4eeb-434a-a769-94289e240d6e","Type":"ContainerDied","Data":"39e502bf62131567dabdc5c2165d500202b342d77f86a5f5e0ec8651702ee294"} Nov 25 15:09:37 crc kubenswrapper[4796]: I1125 15:09:37.967364 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f5xps" event={"ID":"5b44682b-4eeb-434a-a769-94289e240d6e","Type":"ContainerStarted","Data":"8c92c94773ffd540ee35cd91615b0e6d1d9ae3c42fe18d636721e8b574ec4cca"} Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.403344 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.482480 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-cell1-compute-config-1\") pod \"8c595aba-53f4-47cf-9b97-c489fb013f6e\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.482534 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-inventory\") pod \"8c595aba-53f4-47cf-9b97-c489fb013f6e\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.482552 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-ssh-key\") pod \"8c595aba-53f4-47cf-9b97-c489fb013f6e\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.482721 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-migration-ssh-key-0\") pod \"8c595aba-53f4-47cf-9b97-c489fb013f6e\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.482742 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-cell1-compute-config-0\") pod \"8c595aba-53f4-47cf-9b97-c489fb013f6e\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.482758 4796 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-migration-ssh-key-1\") pod \"8c595aba-53f4-47cf-9b97-c489fb013f6e\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.482792 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmq59\" (UniqueName: \"kubernetes.io/projected/8c595aba-53f4-47cf-9b97-c489fb013f6e-kube-api-access-zmq59\") pod \"8c595aba-53f4-47cf-9b97-c489fb013f6e\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.482835 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-extra-config-0\") pod \"8c595aba-53f4-47cf-9b97-c489fb013f6e\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.482885 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-combined-ca-bundle\") pod \"8c595aba-53f4-47cf-9b97-c489fb013f6e\" (UID: \"8c595aba-53f4-47cf-9b97-c489fb013f6e\") " Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.489311 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c595aba-53f4-47cf-9b97-c489fb013f6e-kube-api-access-zmq59" (OuterVolumeSpecName: "kube-api-access-zmq59") pod "8c595aba-53f4-47cf-9b97-c489fb013f6e" (UID: "8c595aba-53f4-47cf-9b97-c489fb013f6e"). InnerVolumeSpecName "kube-api-access-zmq59". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.503825 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "8c595aba-53f4-47cf-9b97-c489fb013f6e" (UID: "8c595aba-53f4-47cf-9b97-c489fb013f6e"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.512140 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-inventory" (OuterVolumeSpecName: "inventory") pod "8c595aba-53f4-47cf-9b97-c489fb013f6e" (UID: "8c595aba-53f4-47cf-9b97-c489fb013f6e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.513367 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "8c595aba-53f4-47cf-9b97-c489fb013f6e" (UID: "8c595aba-53f4-47cf-9b97-c489fb013f6e"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.519779 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "8c595aba-53f4-47cf-9b97-c489fb013f6e" (UID: "8c595aba-53f4-47cf-9b97-c489fb013f6e"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.522406 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "8c595aba-53f4-47cf-9b97-c489fb013f6e" (UID: "8c595aba-53f4-47cf-9b97-c489fb013f6e"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.532125 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8c595aba-53f4-47cf-9b97-c489fb013f6e" (UID: "8c595aba-53f4-47cf-9b97-c489fb013f6e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.533091 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "8c595aba-53f4-47cf-9b97-c489fb013f6e" (UID: "8c595aba-53f4-47cf-9b97-c489fb013f6e"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.548479 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "8c595aba-53f4-47cf-9b97-c489fb013f6e" (UID: "8c595aba-53f4-47cf-9b97-c489fb013f6e"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.584963 4796 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.585000 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.585013 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.585025 4796 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.585040 4796 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.585052 4796 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.585065 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmq59\" (UniqueName: \"kubernetes.io/projected/8c595aba-53f4-47cf-9b97-c489fb013f6e-kube-api-access-zmq59\") on node \"crc\" DevicePath \"\"" Nov 25 
15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.585076 4796 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.585088 4796 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c595aba-53f4-47cf-9b97-c489fb013f6e-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.992335 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" event={"ID":"8c595aba-53f4-47cf-9b97-c489fb013f6e","Type":"ContainerDied","Data":"cd5bc2187a632393f0c85e7ca3cb32bc35035c2a2e167cd975214b7685fbd9bd"} Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.992375 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd5bc2187a632393f0c85e7ca3cb32bc35035c2a2e167cd975214b7685fbd9bd" Nov 25 15:09:39 crc kubenswrapper[4796]: I1125 15:09:39.992429 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5l2zn" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.095923 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788"] Nov 25 15:09:40 crc kubenswrapper[4796]: E1125 15:09:40.096514 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c595aba-53f4-47cf-9b97-c489fb013f6e" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.096536 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c595aba-53f4-47cf-9b97-c489fb013f6e" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.096748 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c595aba-53f4-47cf-9b97-c489fb013f6e" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.097407 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.100491 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.100757 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.100915 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.104356 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.104837 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n2hfx" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.117430 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788"] Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.196070 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99788\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.196136 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-99788\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.196161 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99788\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.196205 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99788\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.196927 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkjrg\" (UniqueName: \"kubernetes.io/projected/885ec954-19ea-488f-badc-9dc879859a45-kube-api-access-vkjrg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99788\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.197287 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99788\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.197548 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99788\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.300610 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99788\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.300771 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99788\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.300832 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99788\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.301004 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99788\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.301089 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99788\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.301141 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkjrg\" (UniqueName: \"kubernetes.io/projected/885ec954-19ea-488f-badc-9dc879859a45-kube-api-access-vkjrg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99788\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.301225 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99788\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.306619 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-ceilometer-compute-config-data-2\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-99788\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.306738 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99788\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.307869 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99788\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.308887 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99788\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.309594 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99788\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.316716 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99788\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.323396 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkjrg\" (UniqueName: \"kubernetes.io/projected/885ec954-19ea-488f-badc-9dc879859a45-kube-api-access-vkjrg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-99788\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.430035 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" Nov 25 15:09:40 crc kubenswrapper[4796]: W1125 15:09:40.960314 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod885ec954_19ea_488f_badc_9dc879859a45.slice/crio-7588504d4ae4358ef7c027ba830726a4799ab022af2ae908c032372a6de995d6 WatchSource:0}: Error finding container 7588504d4ae4358ef7c027ba830726a4799ab022af2ae908c032372a6de995d6: Status 404 returned error can't find the container with id 7588504d4ae4358ef7c027ba830726a4799ab022af2ae908c032372a6de995d6 Nov 25 15:09:40 crc kubenswrapper[4796]: I1125 15:09:40.990727 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788"] Nov 25 15:09:41 crc kubenswrapper[4796]: I1125 15:09:41.001063 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" 
event={"ID":"885ec954-19ea-488f-badc-9dc879859a45","Type":"ContainerStarted","Data":"7588504d4ae4358ef7c027ba830726a4799ab022af2ae908c032372a6de995d6"} Nov 25 15:09:42 crc kubenswrapper[4796]: I1125 15:09:42.012745 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" event={"ID":"885ec954-19ea-488f-badc-9dc879859a45","Type":"ContainerStarted","Data":"f7d262f7e9ec28ba3c7181c3936e9b58a7fc2098702e959b7299bf661f66c7c4"} Nov 25 15:09:42 crc kubenswrapper[4796]: I1125 15:09:42.036064 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" podStartSLOduration=1.640512929 podStartE2EDuration="2.036046895s" podCreationTimestamp="2025-11-25 15:09:40 +0000 UTC" firstStartedPulling="2025-11-25 15:09:40.966308518 +0000 UTC m=+2709.309417942" lastFinishedPulling="2025-11-25 15:09:41.361842464 +0000 UTC m=+2709.704951908" observedRunningTime="2025-11-25 15:09:42.031731841 +0000 UTC m=+2710.374841285" watchObservedRunningTime="2025-11-25 15:09:42.036046895 +0000 UTC m=+2710.379156319" Nov 25 15:09:49 crc kubenswrapper[4796]: I1125 15:09:49.077014 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f5xps" event={"ID":"5b44682b-4eeb-434a-a769-94289e240d6e","Type":"ContainerStarted","Data":"a8c0db1b02341d431f14475be9b95659dd23fb4636b9f70d8bea3e667c6de20f"} Nov 25 15:09:49 crc kubenswrapper[4796]: I1125 15:09:49.514352 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:09:49 crc kubenswrapper[4796]: I1125 15:09:49.514409 4796 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:09:50 crc kubenswrapper[4796]: I1125 15:09:50.088425 4796 generic.go:334] "Generic (PLEG): container finished" podID="5b44682b-4eeb-434a-a769-94289e240d6e" containerID="a8c0db1b02341d431f14475be9b95659dd23fb4636b9f70d8bea3e667c6de20f" exitCode=0 Nov 25 15:09:50 crc kubenswrapper[4796]: I1125 15:09:50.088477 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f5xps" event={"ID":"5b44682b-4eeb-434a-a769-94289e240d6e","Type":"ContainerDied","Data":"a8c0db1b02341d431f14475be9b95659dd23fb4636b9f70d8bea3e667c6de20f"} Nov 25 15:09:51 crc kubenswrapper[4796]: I1125 15:09:51.101616 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f5xps" event={"ID":"5b44682b-4eeb-434a-a769-94289e240d6e","Type":"ContainerStarted","Data":"5b17c5369711b2cf4b5775c28de418bcf4401786705da6b6b96e17f825f491a1"} Nov 25 15:09:51 crc kubenswrapper[4796]: I1125 15:09:51.122885 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f5xps" podStartSLOduration=2.574762409 podStartE2EDuration="15.122867791s" podCreationTimestamp="2025-11-25 15:09:36 +0000 UTC" firstStartedPulling="2025-11-25 15:09:37.969550635 +0000 UTC m=+2706.312660069" lastFinishedPulling="2025-11-25 15:09:50.517656027 +0000 UTC m=+2718.860765451" observedRunningTime="2025-11-25 15:09:51.120378323 +0000 UTC m=+2719.463487767" watchObservedRunningTime="2025-11-25 15:09:51.122867791 +0000 UTC m=+2719.465977215" Nov 25 15:09:56 crc kubenswrapper[4796]: I1125 15:09:56.821490 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f5xps" Nov 25 
15:09:56 crc kubenswrapper[4796]: I1125 15:09:56.823096 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f5xps" Nov 25 15:09:56 crc kubenswrapper[4796]: I1125 15:09:56.875303 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f5xps" Nov 25 15:09:57 crc kubenswrapper[4796]: I1125 15:09:57.194949 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f5xps" Nov 25 15:09:57 crc kubenswrapper[4796]: I1125 15:09:57.263122 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f5xps"] Nov 25 15:09:57 crc kubenswrapper[4796]: I1125 15:09:57.308110 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g98hr"] Nov 25 15:09:57 crc kubenswrapper[4796]: I1125 15:09:57.308400 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g98hr" podUID="1fc0642f-5868-4241-a027-a9cd7e401962" containerName="registry-server" containerID="cri-o://749b8274a14d7b04fe90026da8e54928bf7570c164b1d100b820dac76783b1c9" gracePeriod=2 Nov 25 15:09:57 crc kubenswrapper[4796]: E1125 15:09:57.535030 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fc0642f_5868_4241_a027_a9cd7e401962.slice/crio-conmon-749b8274a14d7b04fe90026da8e54928bf7570c164b1d100b820dac76783b1c9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fc0642f_5868_4241_a027_a9cd7e401962.slice/crio-749b8274a14d7b04fe90026da8e54928bf7570c164b1d100b820dac76783b1c9.scope\": RecentStats: unable to find data in memory cache]" Nov 25 15:09:57 crc kubenswrapper[4796]: I1125 15:09:57.862644 4796 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g98hr" Nov 25 15:09:58 crc kubenswrapper[4796]: I1125 15:09:58.054756 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qvbs\" (UniqueName: \"kubernetes.io/projected/1fc0642f-5868-4241-a027-a9cd7e401962-kube-api-access-9qvbs\") pod \"1fc0642f-5868-4241-a027-a9cd7e401962\" (UID: \"1fc0642f-5868-4241-a027-a9cd7e401962\") " Nov 25 15:09:58 crc kubenswrapper[4796]: I1125 15:09:58.055142 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fc0642f-5868-4241-a027-a9cd7e401962-catalog-content\") pod \"1fc0642f-5868-4241-a027-a9cd7e401962\" (UID: \"1fc0642f-5868-4241-a027-a9cd7e401962\") " Nov 25 15:09:58 crc kubenswrapper[4796]: I1125 15:09:58.055163 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fc0642f-5868-4241-a027-a9cd7e401962-utilities\") pod \"1fc0642f-5868-4241-a027-a9cd7e401962\" (UID: \"1fc0642f-5868-4241-a027-a9cd7e401962\") " Nov 25 15:09:58 crc kubenswrapper[4796]: I1125 15:09:58.056071 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fc0642f-5868-4241-a027-a9cd7e401962-utilities" (OuterVolumeSpecName: "utilities") pod "1fc0642f-5868-4241-a027-a9cd7e401962" (UID: "1fc0642f-5868-4241-a027-a9cd7e401962"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:09:58 crc kubenswrapper[4796]: I1125 15:09:58.061822 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fc0642f-5868-4241-a027-a9cd7e401962-kube-api-access-9qvbs" (OuterVolumeSpecName: "kube-api-access-9qvbs") pod "1fc0642f-5868-4241-a027-a9cd7e401962" (UID: "1fc0642f-5868-4241-a027-a9cd7e401962"). 
InnerVolumeSpecName "kube-api-access-9qvbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:09:58 crc kubenswrapper[4796]: I1125 15:09:58.138047 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fc0642f-5868-4241-a027-a9cd7e401962-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1fc0642f-5868-4241-a027-a9cd7e401962" (UID: "1fc0642f-5868-4241-a027-a9cd7e401962"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:09:58 crc kubenswrapper[4796]: I1125 15:09:58.157811 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qvbs\" (UniqueName: \"kubernetes.io/projected/1fc0642f-5868-4241-a027-a9cd7e401962-kube-api-access-9qvbs\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:58 crc kubenswrapper[4796]: I1125 15:09:58.157845 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fc0642f-5868-4241-a027-a9cd7e401962-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:58 crc kubenswrapper[4796]: I1125 15:09:58.157856 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fc0642f-5868-4241-a027-a9cd7e401962-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:58 crc kubenswrapper[4796]: I1125 15:09:58.162778 4796 generic.go:334] "Generic (PLEG): container finished" podID="1fc0642f-5868-4241-a027-a9cd7e401962" containerID="749b8274a14d7b04fe90026da8e54928bf7570c164b1d100b820dac76783b1c9" exitCode=0 Nov 25 15:09:58 crc kubenswrapper[4796]: I1125 15:09:58.162846 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g98hr" Nov 25 15:09:58 crc kubenswrapper[4796]: I1125 15:09:58.162875 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g98hr" event={"ID":"1fc0642f-5868-4241-a027-a9cd7e401962","Type":"ContainerDied","Data":"749b8274a14d7b04fe90026da8e54928bf7570c164b1d100b820dac76783b1c9"} Nov 25 15:09:58 crc kubenswrapper[4796]: I1125 15:09:58.163149 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g98hr" event={"ID":"1fc0642f-5868-4241-a027-a9cd7e401962","Type":"ContainerDied","Data":"7fde644b491c90d0ce22f9e77e25fb885fd9ea23ba4e1632703d673df8af5c8b"} Nov 25 15:09:58 crc kubenswrapper[4796]: I1125 15:09:58.163166 4796 scope.go:117] "RemoveContainer" containerID="749b8274a14d7b04fe90026da8e54928bf7570c164b1d100b820dac76783b1c9" Nov 25 15:09:58 crc kubenswrapper[4796]: I1125 15:09:58.193644 4796 scope.go:117] "RemoveContainer" containerID="cf545c9c2194db384c4624a19c5ef565b46227c0c20ea61a31371e8ba343ceb8" Nov 25 15:09:58 crc kubenswrapper[4796]: I1125 15:09:58.203485 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g98hr"] Nov 25 15:09:58 crc kubenswrapper[4796]: I1125 15:09:58.214435 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g98hr"] Nov 25 15:09:58 crc kubenswrapper[4796]: I1125 15:09:58.227154 4796 scope.go:117] "RemoveContainer" containerID="7ad910d5b08ad72a0395a661996799621e0e3f0c3cf6831fe364bcf3d3f35ec3" Nov 25 15:09:58 crc kubenswrapper[4796]: I1125 15:09:58.259476 4796 scope.go:117] "RemoveContainer" containerID="749b8274a14d7b04fe90026da8e54928bf7570c164b1d100b820dac76783b1c9" Nov 25 15:09:58 crc kubenswrapper[4796]: E1125 15:09:58.260059 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"749b8274a14d7b04fe90026da8e54928bf7570c164b1d100b820dac76783b1c9\": container with ID starting with 749b8274a14d7b04fe90026da8e54928bf7570c164b1d100b820dac76783b1c9 not found: ID does not exist" containerID="749b8274a14d7b04fe90026da8e54928bf7570c164b1d100b820dac76783b1c9" Nov 25 15:09:58 crc kubenswrapper[4796]: I1125 15:09:58.260096 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"749b8274a14d7b04fe90026da8e54928bf7570c164b1d100b820dac76783b1c9"} err="failed to get container status \"749b8274a14d7b04fe90026da8e54928bf7570c164b1d100b820dac76783b1c9\": rpc error: code = NotFound desc = could not find container \"749b8274a14d7b04fe90026da8e54928bf7570c164b1d100b820dac76783b1c9\": container with ID starting with 749b8274a14d7b04fe90026da8e54928bf7570c164b1d100b820dac76783b1c9 not found: ID does not exist" Nov 25 15:09:58 crc kubenswrapper[4796]: I1125 15:09:58.260122 4796 scope.go:117] "RemoveContainer" containerID="cf545c9c2194db384c4624a19c5ef565b46227c0c20ea61a31371e8ba343ceb8" Nov 25 15:09:58 crc kubenswrapper[4796]: E1125 15:09:58.260415 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf545c9c2194db384c4624a19c5ef565b46227c0c20ea61a31371e8ba343ceb8\": container with ID starting with cf545c9c2194db384c4624a19c5ef565b46227c0c20ea61a31371e8ba343ceb8 not found: ID does not exist" containerID="cf545c9c2194db384c4624a19c5ef565b46227c0c20ea61a31371e8ba343ceb8" Nov 25 15:09:58 crc kubenswrapper[4796]: I1125 15:09:58.260520 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf545c9c2194db384c4624a19c5ef565b46227c0c20ea61a31371e8ba343ceb8"} err="failed to get container status \"cf545c9c2194db384c4624a19c5ef565b46227c0c20ea61a31371e8ba343ceb8\": rpc error: code = NotFound desc = could not find container \"cf545c9c2194db384c4624a19c5ef565b46227c0c20ea61a31371e8ba343ceb8\": container with ID 
starting with cf545c9c2194db384c4624a19c5ef565b46227c0c20ea61a31371e8ba343ceb8 not found: ID does not exist" Nov 25 15:09:58 crc kubenswrapper[4796]: I1125 15:09:58.260695 4796 scope.go:117] "RemoveContainer" containerID="7ad910d5b08ad72a0395a661996799621e0e3f0c3cf6831fe364bcf3d3f35ec3" Nov 25 15:09:58 crc kubenswrapper[4796]: E1125 15:09:58.261117 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ad910d5b08ad72a0395a661996799621e0e3f0c3cf6831fe364bcf3d3f35ec3\": container with ID starting with 7ad910d5b08ad72a0395a661996799621e0e3f0c3cf6831fe364bcf3d3f35ec3 not found: ID does not exist" containerID="7ad910d5b08ad72a0395a661996799621e0e3f0c3cf6831fe364bcf3d3f35ec3" Nov 25 15:09:58 crc kubenswrapper[4796]: I1125 15:09:58.261137 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ad910d5b08ad72a0395a661996799621e0e3f0c3cf6831fe364bcf3d3f35ec3"} err="failed to get container status \"7ad910d5b08ad72a0395a661996799621e0e3f0c3cf6831fe364bcf3d3f35ec3\": rpc error: code = NotFound desc = could not find container \"7ad910d5b08ad72a0395a661996799621e0e3f0c3cf6831fe364bcf3d3f35ec3\": container with ID starting with 7ad910d5b08ad72a0395a661996799621e0e3f0c3cf6831fe364bcf3d3f35ec3 not found: ID does not exist" Nov 25 15:09:58 crc kubenswrapper[4796]: I1125 15:09:58.419402 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fc0642f-5868-4241-a027-a9cd7e401962" path="/var/lib/kubelet/pods/1fc0642f-5868-4241-a027-a9cd7e401962/volumes" Nov 25 15:10:19 crc kubenswrapper[4796]: I1125 15:10:19.514274 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:10:19 crc kubenswrapper[4796]: I1125 
15:10:19.514816 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:10:49 crc kubenswrapper[4796]: I1125 15:10:49.514037 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:10:49 crc kubenswrapper[4796]: I1125 15:10:49.514601 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:10:49 crc kubenswrapper[4796]: I1125 15:10:49.514648 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 15:10:49 crc kubenswrapper[4796]: I1125 15:10:49.515407 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"90cd350136ae8324b154646518487385fea8d658259347ebba5e0c7e669bf9e5"} pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 15:10:49 crc kubenswrapper[4796]: I1125 15:10:49.515475 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" 
containerName="machine-config-daemon" containerID="cri-o://90cd350136ae8324b154646518487385fea8d658259347ebba5e0c7e669bf9e5" gracePeriod=600 Nov 25 15:10:49 crc kubenswrapper[4796]: I1125 15:10:49.649609 4796 generic.go:334] "Generic (PLEG): container finished" podID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerID="90cd350136ae8324b154646518487385fea8d658259347ebba5e0c7e669bf9e5" exitCode=0 Nov 25 15:10:49 crc kubenswrapper[4796]: I1125 15:10:49.649688 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerDied","Data":"90cd350136ae8324b154646518487385fea8d658259347ebba5e0c7e669bf9e5"} Nov 25 15:10:49 crc kubenswrapper[4796]: I1125 15:10:49.649949 4796 scope.go:117] "RemoveContainer" containerID="b2829af3b80ee79eee226e9d15ead0acde2b876138c09b6ef5f69f2edf7d2ba0" Nov 25 15:10:50 crc kubenswrapper[4796]: I1125 15:10:50.658989 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerStarted","Data":"59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5"} Nov 25 15:11:17 crc kubenswrapper[4796]: I1125 15:11:17.599179 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rlqvw"] Nov 25 15:11:17 crc kubenswrapper[4796]: E1125 15:11:17.600190 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fc0642f-5868-4241-a027-a9cd7e401962" containerName="extract-utilities" Nov 25 15:11:17 crc kubenswrapper[4796]: I1125 15:11:17.600204 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fc0642f-5868-4241-a027-a9cd7e401962" containerName="extract-utilities" Nov 25 15:11:17 crc kubenswrapper[4796]: E1125 15:11:17.600216 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fc0642f-5868-4241-a027-a9cd7e401962" 
containerName="registry-server" Nov 25 15:11:17 crc kubenswrapper[4796]: I1125 15:11:17.600226 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fc0642f-5868-4241-a027-a9cd7e401962" containerName="registry-server" Nov 25 15:11:17 crc kubenswrapper[4796]: E1125 15:11:17.600266 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fc0642f-5868-4241-a027-a9cd7e401962" containerName="extract-content" Nov 25 15:11:17 crc kubenswrapper[4796]: I1125 15:11:17.600273 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fc0642f-5868-4241-a027-a9cd7e401962" containerName="extract-content" Nov 25 15:11:17 crc kubenswrapper[4796]: I1125 15:11:17.600529 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fc0642f-5868-4241-a027-a9cd7e401962" containerName="registry-server" Nov 25 15:11:17 crc kubenswrapper[4796]: I1125 15:11:17.602146 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rlqvw" Nov 25 15:11:17 crc kubenswrapper[4796]: I1125 15:11:17.611993 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rlqvw"] Nov 25 15:11:17 crc kubenswrapper[4796]: I1125 15:11:17.667533 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f72624c-5dc2-444a-a328-e65f5b741aa9-catalog-content\") pod \"redhat-marketplace-rlqvw\" (UID: \"5f72624c-5dc2-444a-a328-e65f5b741aa9\") " pod="openshift-marketplace/redhat-marketplace-rlqvw" Nov 25 15:11:17 crc kubenswrapper[4796]: I1125 15:11:17.668052 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f72624c-5dc2-444a-a328-e65f5b741aa9-utilities\") pod \"redhat-marketplace-rlqvw\" (UID: \"5f72624c-5dc2-444a-a328-e65f5b741aa9\") " 
pod="openshift-marketplace/redhat-marketplace-rlqvw" Nov 25 15:11:17 crc kubenswrapper[4796]: I1125 15:11:17.668089 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlhdx\" (UniqueName: \"kubernetes.io/projected/5f72624c-5dc2-444a-a328-e65f5b741aa9-kube-api-access-mlhdx\") pod \"redhat-marketplace-rlqvw\" (UID: \"5f72624c-5dc2-444a-a328-e65f5b741aa9\") " pod="openshift-marketplace/redhat-marketplace-rlqvw" Nov 25 15:11:17 crc kubenswrapper[4796]: I1125 15:11:17.768939 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f72624c-5dc2-444a-a328-e65f5b741aa9-catalog-content\") pod \"redhat-marketplace-rlqvw\" (UID: \"5f72624c-5dc2-444a-a328-e65f5b741aa9\") " pod="openshift-marketplace/redhat-marketplace-rlqvw" Nov 25 15:11:17 crc kubenswrapper[4796]: I1125 15:11:17.768988 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f72624c-5dc2-444a-a328-e65f5b741aa9-utilities\") pod \"redhat-marketplace-rlqvw\" (UID: \"5f72624c-5dc2-444a-a328-e65f5b741aa9\") " pod="openshift-marketplace/redhat-marketplace-rlqvw" Nov 25 15:11:17 crc kubenswrapper[4796]: I1125 15:11:17.769008 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlhdx\" (UniqueName: \"kubernetes.io/projected/5f72624c-5dc2-444a-a328-e65f5b741aa9-kube-api-access-mlhdx\") pod \"redhat-marketplace-rlqvw\" (UID: \"5f72624c-5dc2-444a-a328-e65f5b741aa9\") " pod="openshift-marketplace/redhat-marketplace-rlqvw" Nov 25 15:11:17 crc kubenswrapper[4796]: I1125 15:11:17.769521 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f72624c-5dc2-444a-a328-e65f5b741aa9-catalog-content\") pod \"redhat-marketplace-rlqvw\" (UID: \"5f72624c-5dc2-444a-a328-e65f5b741aa9\") " 
pod="openshift-marketplace/redhat-marketplace-rlqvw" Nov 25 15:11:17 crc kubenswrapper[4796]: I1125 15:11:17.769563 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f72624c-5dc2-444a-a328-e65f5b741aa9-utilities\") pod \"redhat-marketplace-rlqvw\" (UID: \"5f72624c-5dc2-444a-a328-e65f5b741aa9\") " pod="openshift-marketplace/redhat-marketplace-rlqvw" Nov 25 15:11:17 crc kubenswrapper[4796]: I1125 15:11:17.801888 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlhdx\" (UniqueName: \"kubernetes.io/projected/5f72624c-5dc2-444a-a328-e65f5b741aa9-kube-api-access-mlhdx\") pod \"redhat-marketplace-rlqvw\" (UID: \"5f72624c-5dc2-444a-a328-e65f5b741aa9\") " pod="openshift-marketplace/redhat-marketplace-rlqvw" Nov 25 15:11:17 crc kubenswrapper[4796]: I1125 15:11:17.932459 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rlqvw" Nov 25 15:11:18 crc kubenswrapper[4796]: I1125 15:11:18.373118 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rlqvw"] Nov 25 15:11:18 crc kubenswrapper[4796]: W1125 15:11:18.382327 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f72624c_5dc2_444a_a328_e65f5b741aa9.slice/crio-999834aca0ec3069bc51d7879701453a9dceb57f1bf8ace6eaa038fc43342a55 WatchSource:0}: Error finding container 999834aca0ec3069bc51d7879701453a9dceb57f1bf8ace6eaa038fc43342a55: Status 404 returned error can't find the container with id 999834aca0ec3069bc51d7879701453a9dceb57f1bf8ace6eaa038fc43342a55 Nov 25 15:11:18 crc kubenswrapper[4796]: I1125 15:11:18.934118 4796 generic.go:334] "Generic (PLEG): container finished" podID="5f72624c-5dc2-444a-a328-e65f5b741aa9" containerID="3ea44422668fc53b7fd2fa9445a7c55bcf35e7dd78221a6d9822fe2ecd9cb61f" exitCode=0 Nov 25 
15:11:18 crc kubenswrapper[4796]: I1125 15:11:18.934223 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rlqvw" event={"ID":"5f72624c-5dc2-444a-a328-e65f5b741aa9","Type":"ContainerDied","Data":"3ea44422668fc53b7fd2fa9445a7c55bcf35e7dd78221a6d9822fe2ecd9cb61f"} Nov 25 15:11:18 crc kubenswrapper[4796]: I1125 15:11:18.934505 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rlqvw" event={"ID":"5f72624c-5dc2-444a-a328-e65f5b741aa9","Type":"ContainerStarted","Data":"999834aca0ec3069bc51d7879701453a9dceb57f1bf8ace6eaa038fc43342a55"} Nov 25 15:11:20 crc kubenswrapper[4796]: I1125 15:11:20.958439 4796 generic.go:334] "Generic (PLEG): container finished" podID="5f72624c-5dc2-444a-a328-e65f5b741aa9" containerID="a116b670e6be21aa42ad65d35d0cd36eb54e1831be604e282151ee627f84c6e7" exitCode=0 Nov 25 15:11:20 crc kubenswrapper[4796]: I1125 15:11:20.958619 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rlqvw" event={"ID":"5f72624c-5dc2-444a-a328-e65f5b741aa9","Type":"ContainerDied","Data":"a116b670e6be21aa42ad65d35d0cd36eb54e1831be604e282151ee627f84c6e7"} Nov 25 15:11:21 crc kubenswrapper[4796]: I1125 15:11:21.970092 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rlqvw" event={"ID":"5f72624c-5dc2-444a-a328-e65f5b741aa9","Type":"ContainerStarted","Data":"132e7190c93f9f35c408aeb01795195fc0592154491009ebf546421d2f3ea5ca"} Nov 25 15:11:21 crc kubenswrapper[4796]: I1125 15:11:21.996815 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rlqvw" podStartSLOduration=2.527336236 podStartE2EDuration="4.99678559s" podCreationTimestamp="2025-11-25 15:11:17 +0000 UTC" firstStartedPulling="2025-11-25 15:11:18.936016407 +0000 UTC m=+2807.279125841" lastFinishedPulling="2025-11-25 15:11:21.405465731 +0000 UTC 
m=+2809.748575195" observedRunningTime="2025-11-25 15:11:21.98750209 +0000 UTC m=+2810.330611534" watchObservedRunningTime="2025-11-25 15:11:21.99678559 +0000 UTC m=+2810.339895014" Nov 25 15:11:27 crc kubenswrapper[4796]: I1125 15:11:27.932750 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rlqvw" Nov 25 15:11:27 crc kubenswrapper[4796]: I1125 15:11:27.933302 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rlqvw" Nov 25 15:11:27 crc kubenswrapper[4796]: I1125 15:11:27.986547 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rlqvw" Nov 25 15:11:28 crc kubenswrapper[4796]: I1125 15:11:28.074928 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rlqvw" Nov 25 15:11:28 crc kubenswrapper[4796]: I1125 15:11:28.217361 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rlqvw"] Nov 25 15:11:30 crc kubenswrapper[4796]: I1125 15:11:30.040557 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rlqvw" podUID="5f72624c-5dc2-444a-a328-e65f5b741aa9" containerName="registry-server" containerID="cri-o://132e7190c93f9f35c408aeb01795195fc0592154491009ebf546421d2f3ea5ca" gracePeriod=2 Nov 25 15:11:30 crc kubenswrapper[4796]: I1125 15:11:30.491182 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rlqvw" Nov 25 15:11:30 crc kubenswrapper[4796]: I1125 15:11:30.614509 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f72624c-5dc2-444a-a328-e65f5b741aa9-catalog-content\") pod \"5f72624c-5dc2-444a-a328-e65f5b741aa9\" (UID: \"5f72624c-5dc2-444a-a328-e65f5b741aa9\") " Nov 25 15:11:30 crc kubenswrapper[4796]: I1125 15:11:30.614644 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f72624c-5dc2-444a-a328-e65f5b741aa9-utilities\") pod \"5f72624c-5dc2-444a-a328-e65f5b741aa9\" (UID: \"5f72624c-5dc2-444a-a328-e65f5b741aa9\") " Nov 25 15:11:30 crc kubenswrapper[4796]: I1125 15:11:30.614708 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlhdx\" (UniqueName: \"kubernetes.io/projected/5f72624c-5dc2-444a-a328-e65f5b741aa9-kube-api-access-mlhdx\") pod \"5f72624c-5dc2-444a-a328-e65f5b741aa9\" (UID: \"5f72624c-5dc2-444a-a328-e65f5b741aa9\") " Nov 25 15:11:30 crc kubenswrapper[4796]: I1125 15:11:30.615548 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f72624c-5dc2-444a-a328-e65f5b741aa9-utilities" (OuterVolumeSpecName: "utilities") pod "5f72624c-5dc2-444a-a328-e65f5b741aa9" (UID: "5f72624c-5dc2-444a-a328-e65f5b741aa9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:11:30 crc kubenswrapper[4796]: I1125 15:11:30.623031 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f72624c-5dc2-444a-a328-e65f5b741aa9-kube-api-access-mlhdx" (OuterVolumeSpecName: "kube-api-access-mlhdx") pod "5f72624c-5dc2-444a-a328-e65f5b741aa9" (UID: "5f72624c-5dc2-444a-a328-e65f5b741aa9"). InnerVolumeSpecName "kube-api-access-mlhdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:11:30 crc kubenswrapper[4796]: I1125 15:11:30.689696 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f72624c-5dc2-444a-a328-e65f5b741aa9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f72624c-5dc2-444a-a328-e65f5b741aa9" (UID: "5f72624c-5dc2-444a-a328-e65f5b741aa9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:11:30 crc kubenswrapper[4796]: I1125 15:11:30.717143 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f72624c-5dc2-444a-a328-e65f5b741aa9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:11:30 crc kubenswrapper[4796]: I1125 15:11:30.717184 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f72624c-5dc2-444a-a328-e65f5b741aa9-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:11:30 crc kubenswrapper[4796]: I1125 15:11:30.717193 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlhdx\" (UniqueName: \"kubernetes.io/projected/5f72624c-5dc2-444a-a328-e65f5b741aa9-kube-api-access-mlhdx\") on node \"crc\" DevicePath \"\"" Nov 25 15:11:31 crc kubenswrapper[4796]: I1125 15:11:31.051789 4796 generic.go:334] "Generic (PLEG): container finished" podID="5f72624c-5dc2-444a-a328-e65f5b741aa9" containerID="132e7190c93f9f35c408aeb01795195fc0592154491009ebf546421d2f3ea5ca" exitCode=0 Nov 25 15:11:31 crc kubenswrapper[4796]: I1125 15:11:31.051866 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rlqvw" Nov 25 15:11:31 crc kubenswrapper[4796]: I1125 15:11:31.051894 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rlqvw" event={"ID":"5f72624c-5dc2-444a-a328-e65f5b741aa9","Type":"ContainerDied","Data":"132e7190c93f9f35c408aeb01795195fc0592154491009ebf546421d2f3ea5ca"} Nov 25 15:11:31 crc kubenswrapper[4796]: I1125 15:11:31.052833 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rlqvw" event={"ID":"5f72624c-5dc2-444a-a328-e65f5b741aa9","Type":"ContainerDied","Data":"999834aca0ec3069bc51d7879701453a9dceb57f1bf8ace6eaa038fc43342a55"} Nov 25 15:11:31 crc kubenswrapper[4796]: I1125 15:11:31.052873 4796 scope.go:117] "RemoveContainer" containerID="132e7190c93f9f35c408aeb01795195fc0592154491009ebf546421d2f3ea5ca" Nov 25 15:11:31 crc kubenswrapper[4796]: I1125 15:11:31.081861 4796 scope.go:117] "RemoveContainer" containerID="a116b670e6be21aa42ad65d35d0cd36eb54e1831be604e282151ee627f84c6e7" Nov 25 15:11:31 crc kubenswrapper[4796]: I1125 15:11:31.095029 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rlqvw"] Nov 25 15:11:31 crc kubenswrapper[4796]: I1125 15:11:31.103779 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rlqvw"] Nov 25 15:11:31 crc kubenswrapper[4796]: I1125 15:11:31.112902 4796 scope.go:117] "RemoveContainer" containerID="3ea44422668fc53b7fd2fa9445a7c55bcf35e7dd78221a6d9822fe2ecd9cb61f" Nov 25 15:11:31 crc kubenswrapper[4796]: I1125 15:11:31.150280 4796 scope.go:117] "RemoveContainer" containerID="132e7190c93f9f35c408aeb01795195fc0592154491009ebf546421d2f3ea5ca" Nov 25 15:11:31 crc kubenswrapper[4796]: E1125 15:11:31.150915 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"132e7190c93f9f35c408aeb01795195fc0592154491009ebf546421d2f3ea5ca\": container with ID starting with 132e7190c93f9f35c408aeb01795195fc0592154491009ebf546421d2f3ea5ca not found: ID does not exist" containerID="132e7190c93f9f35c408aeb01795195fc0592154491009ebf546421d2f3ea5ca" Nov 25 15:11:31 crc kubenswrapper[4796]: I1125 15:11:31.150953 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"132e7190c93f9f35c408aeb01795195fc0592154491009ebf546421d2f3ea5ca"} err="failed to get container status \"132e7190c93f9f35c408aeb01795195fc0592154491009ebf546421d2f3ea5ca\": rpc error: code = NotFound desc = could not find container \"132e7190c93f9f35c408aeb01795195fc0592154491009ebf546421d2f3ea5ca\": container with ID starting with 132e7190c93f9f35c408aeb01795195fc0592154491009ebf546421d2f3ea5ca not found: ID does not exist" Nov 25 15:11:31 crc kubenswrapper[4796]: I1125 15:11:31.150984 4796 scope.go:117] "RemoveContainer" containerID="a116b670e6be21aa42ad65d35d0cd36eb54e1831be604e282151ee627f84c6e7" Nov 25 15:11:31 crc kubenswrapper[4796]: E1125 15:11:31.151446 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a116b670e6be21aa42ad65d35d0cd36eb54e1831be604e282151ee627f84c6e7\": container with ID starting with a116b670e6be21aa42ad65d35d0cd36eb54e1831be604e282151ee627f84c6e7 not found: ID does not exist" containerID="a116b670e6be21aa42ad65d35d0cd36eb54e1831be604e282151ee627f84c6e7" Nov 25 15:11:31 crc kubenswrapper[4796]: I1125 15:11:31.151473 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a116b670e6be21aa42ad65d35d0cd36eb54e1831be604e282151ee627f84c6e7"} err="failed to get container status \"a116b670e6be21aa42ad65d35d0cd36eb54e1831be604e282151ee627f84c6e7\": rpc error: code = NotFound desc = could not find container \"a116b670e6be21aa42ad65d35d0cd36eb54e1831be604e282151ee627f84c6e7\": container with ID 
starting with a116b670e6be21aa42ad65d35d0cd36eb54e1831be604e282151ee627f84c6e7 not found: ID does not exist" Nov 25 15:11:31 crc kubenswrapper[4796]: I1125 15:11:31.151489 4796 scope.go:117] "RemoveContainer" containerID="3ea44422668fc53b7fd2fa9445a7c55bcf35e7dd78221a6d9822fe2ecd9cb61f" Nov 25 15:11:31 crc kubenswrapper[4796]: E1125 15:11:31.151932 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ea44422668fc53b7fd2fa9445a7c55bcf35e7dd78221a6d9822fe2ecd9cb61f\": container with ID starting with 3ea44422668fc53b7fd2fa9445a7c55bcf35e7dd78221a6d9822fe2ecd9cb61f not found: ID does not exist" containerID="3ea44422668fc53b7fd2fa9445a7c55bcf35e7dd78221a6d9822fe2ecd9cb61f" Nov 25 15:11:31 crc kubenswrapper[4796]: I1125 15:11:31.151958 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ea44422668fc53b7fd2fa9445a7c55bcf35e7dd78221a6d9822fe2ecd9cb61f"} err="failed to get container status \"3ea44422668fc53b7fd2fa9445a7c55bcf35e7dd78221a6d9822fe2ecd9cb61f\": rpc error: code = NotFound desc = could not find container \"3ea44422668fc53b7fd2fa9445a7c55bcf35e7dd78221a6d9822fe2ecd9cb61f\": container with ID starting with 3ea44422668fc53b7fd2fa9445a7c55bcf35e7dd78221a6d9822fe2ecd9cb61f not found: ID does not exist" Nov 25 15:11:32 crc kubenswrapper[4796]: I1125 15:11:32.425366 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f72624c-5dc2-444a-a328-e65f5b741aa9" path="/var/lib/kubelet/pods/5f72624c-5dc2-444a-a328-e65f5b741aa9/volumes" Nov 25 15:11:58 crc kubenswrapper[4796]: I1125 15:11:58.304645 4796 generic.go:334] "Generic (PLEG): container finished" podID="885ec954-19ea-488f-badc-9dc879859a45" containerID="f7d262f7e9ec28ba3c7181c3936e9b58a7fc2098702e959b7299bf661f66c7c4" exitCode=0 Nov 25 15:11:58 crc kubenswrapper[4796]: I1125 15:11:58.304741 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" event={"ID":"885ec954-19ea-488f-badc-9dc879859a45","Type":"ContainerDied","Data":"f7d262f7e9ec28ba3c7181c3936e9b58a7fc2098702e959b7299bf661f66c7c4"} Nov 25 15:11:59 crc kubenswrapper[4796]: I1125 15:11:59.811138 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" Nov 25 15:11:59 crc kubenswrapper[4796]: I1125 15:11:59.981837 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-ssh-key\") pod \"885ec954-19ea-488f-badc-9dc879859a45\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " Nov 25 15:11:59 crc kubenswrapper[4796]: I1125 15:11:59.982167 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-inventory\") pod \"885ec954-19ea-488f-badc-9dc879859a45\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " Nov 25 15:11:59 crc kubenswrapper[4796]: I1125 15:11:59.982191 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-telemetry-combined-ca-bundle\") pod \"885ec954-19ea-488f-badc-9dc879859a45\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " Nov 25 15:11:59 crc kubenswrapper[4796]: I1125 15:11:59.982232 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-ceilometer-compute-config-data-0\") pod \"885ec954-19ea-488f-badc-9dc879859a45\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " Nov 25 15:11:59 crc kubenswrapper[4796]: I1125 15:11:59.982285 4796 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-vkjrg\" (UniqueName: \"kubernetes.io/projected/885ec954-19ea-488f-badc-9dc879859a45-kube-api-access-vkjrg\") pod \"885ec954-19ea-488f-badc-9dc879859a45\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " Nov 25 15:11:59 crc kubenswrapper[4796]: I1125 15:11:59.982358 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-ceilometer-compute-config-data-2\") pod \"885ec954-19ea-488f-badc-9dc879859a45\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " Nov 25 15:11:59 crc kubenswrapper[4796]: I1125 15:11:59.982440 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-ceilometer-compute-config-data-1\") pod \"885ec954-19ea-488f-badc-9dc879859a45\" (UID: \"885ec954-19ea-488f-badc-9dc879859a45\") " Nov 25 15:11:59 crc kubenswrapper[4796]: I1125 15:11:59.993416 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "885ec954-19ea-488f-badc-9dc879859a45" (UID: "885ec954-19ea-488f-badc-9dc879859a45"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:11:59 crc kubenswrapper[4796]: I1125 15:11:59.996254 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/885ec954-19ea-488f-badc-9dc879859a45-kube-api-access-vkjrg" (OuterVolumeSpecName: "kube-api-access-vkjrg") pod "885ec954-19ea-488f-badc-9dc879859a45" (UID: "885ec954-19ea-488f-badc-9dc879859a45"). InnerVolumeSpecName "kube-api-access-vkjrg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:12:00 crc kubenswrapper[4796]: I1125 15:12:00.012161 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "885ec954-19ea-488f-badc-9dc879859a45" (UID: "885ec954-19ea-488f-badc-9dc879859a45"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:12:00 crc kubenswrapper[4796]: I1125 15:12:00.015196 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "885ec954-19ea-488f-badc-9dc879859a45" (UID: "885ec954-19ea-488f-badc-9dc879859a45"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:12:00 crc kubenswrapper[4796]: I1125 15:12:00.018075 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "885ec954-19ea-488f-badc-9dc879859a45" (UID: "885ec954-19ea-488f-badc-9dc879859a45"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:12:00 crc kubenswrapper[4796]: I1125 15:12:00.020179 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-inventory" (OuterVolumeSpecName: "inventory") pod "885ec954-19ea-488f-badc-9dc879859a45" (UID: "885ec954-19ea-488f-badc-9dc879859a45"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:12:00 crc kubenswrapper[4796]: I1125 15:12:00.030009 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "885ec954-19ea-488f-badc-9dc879859a45" (UID: "885ec954-19ea-488f-badc-9dc879859a45"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:12:00 crc kubenswrapper[4796]: I1125 15:12:00.085404 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:00 crc kubenswrapper[4796]: I1125 15:12:00.085456 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:00 crc kubenswrapper[4796]: I1125 15:12:00.085476 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:00 crc kubenswrapper[4796]: I1125 15:12:00.085496 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:00 crc kubenswrapper[4796]: I1125 15:12:00.085514 4796 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:00 crc kubenswrapper[4796]: I1125 
15:12:00.085533 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/885ec954-19ea-488f-badc-9dc879859a45-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:00 crc kubenswrapper[4796]: I1125 15:12:00.085549 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkjrg\" (UniqueName: \"kubernetes.io/projected/885ec954-19ea-488f-badc-9dc879859a45-kube-api-access-vkjrg\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:00 crc kubenswrapper[4796]: I1125 15:12:00.324564 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" Nov 25 15:12:00 crc kubenswrapper[4796]: I1125 15:12:00.324621 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-99788" event={"ID":"885ec954-19ea-488f-badc-9dc879859a45","Type":"ContainerDied","Data":"7588504d4ae4358ef7c027ba830726a4799ab022af2ae908c032372a6de995d6"} Nov 25 15:12:00 crc kubenswrapper[4796]: I1125 15:12:00.324660 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7588504d4ae4358ef7c027ba830726a4799ab022af2ae908c032372a6de995d6" Nov 25 15:12:49 crc kubenswrapper[4796]: I1125 15:12:49.513538 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:12:49 crc kubenswrapper[4796]: I1125 15:12:49.514162 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.041160 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 25 15:12:54 crc kubenswrapper[4796]: E1125 15:12:54.042318 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f72624c-5dc2-444a-a328-e65f5b741aa9" containerName="registry-server" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.042342 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f72624c-5dc2-444a-a328-e65f5b741aa9" containerName="registry-server" Nov 25 15:12:54 crc kubenswrapper[4796]: E1125 15:12:54.042392 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="885ec954-19ea-488f-badc-9dc879859a45" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.042407 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="885ec954-19ea-488f-badc-9dc879859a45" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 25 15:12:54 crc kubenswrapper[4796]: E1125 15:12:54.042431 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f72624c-5dc2-444a-a328-e65f5b741aa9" containerName="extract-utilities" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.042444 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f72624c-5dc2-444a-a328-e65f5b741aa9" containerName="extract-utilities" Nov 25 15:12:54 crc kubenswrapper[4796]: E1125 15:12:54.042472 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f72624c-5dc2-444a-a328-e65f5b741aa9" containerName="extract-content" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.042485 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f72624c-5dc2-444a-a328-e65f5b741aa9" containerName="extract-content" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.042866 4796 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="885ec954-19ea-488f-badc-9dc879859a45" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.042888 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f72624c-5dc2-444a-a328-e65f5b741aa9" containerName="registry-server" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.043857 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.046607 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.046624 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.047921 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.049036 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-s6pct" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.054398 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.191956 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.192324 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.192379 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.192416 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.192498 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.192630 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cd9r\" (UniqueName: \"kubernetes.io/projected/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-kube-api-access-2cd9r\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.192685 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-config-data\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.192942 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.193001 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.295427 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.295489 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.295567 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cd9r\" (UniqueName: 
\"kubernetes.io/projected/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-kube-api-access-2cd9r\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.295662 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-config-data\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.295767 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.295790 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.295905 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.295929 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-ssh-key\") pod 
\"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.295974 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.296008 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.296350 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.301429 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.302104 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " 
pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.302587 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-config-data\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.305309 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.307184 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.307932 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.320652 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cd9r\" (UniqueName: \"kubernetes.io/projected/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-kube-api-access-2cd9r\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.323555 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.395918 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.851246 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.864715 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 15:12:54 crc kubenswrapper[4796]: I1125 15:12:54.888598 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6","Type":"ContainerStarted","Data":"56b6ae5c9c523e3936ef637fa7ffb3e3f796d34e001b197b030833ec1d1cf0ee"} Nov 25 15:13:19 crc kubenswrapper[4796]: I1125 15:13:19.514265 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:13:19 crc kubenswrapper[4796]: I1125 15:13:19.515984 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:13:23 crc kubenswrapper[4796]: E1125 15:13:23.799094 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 25 15:13:23 crc kubenswrapper[4796]: E1125 15:13:23.799737 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},V
olumeMount{Name:kube-api-access-2cd9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 15:13:23 crc kubenswrapper[4796]: E1125 15:13:23.800998 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6" Nov 25 15:13:24 crc kubenswrapper[4796]: E1125 15:13:24.188535 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6" Nov 25 15:13:37 crc kubenswrapper[4796]: I1125 15:13:37.925152 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 25 15:13:39 crc kubenswrapper[4796]: I1125 15:13:39.346466 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6","Type":"ContainerStarted","Data":"b8dc38954f622176b2e8914db7e48653c211deb6e5ecb277d5e8a6154c085740"} Nov 25 15:13:39 crc kubenswrapper[4796]: I1125 15:13:39.370817 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.312478659 podStartE2EDuration="46.370790223s" podCreationTimestamp="2025-11-25 15:12:53 +0000 UTC" firstStartedPulling="2025-11-25 15:12:54.864233876 +0000 UTC m=+2903.207343300" lastFinishedPulling="2025-11-25 15:13:37.92254543 +0000 UTC m=+2946.265654864" observedRunningTime="2025-11-25 15:13:39.362792032 +0000 UTC m=+2947.705901466" watchObservedRunningTime="2025-11-25 15:13:39.370790223 +0000 UTC m=+2947.713899667" Nov 25 15:13:49 crc kubenswrapper[4796]: I1125 15:13:49.513991 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:13:49 crc kubenswrapper[4796]: I1125 15:13:49.514760 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Nov 25 15:13:49 crc kubenswrapper[4796]: I1125 15:13:49.514834 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 15:13:49 crc kubenswrapper[4796]: I1125 15:13:49.515932 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5"} pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 15:13:49 crc kubenswrapper[4796]: I1125 15:13:49.516022 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" containerID="cri-o://59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" gracePeriod=600 Nov 25 15:13:49 crc kubenswrapper[4796]: E1125 15:13:49.639712 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:13:50 crc kubenswrapper[4796]: I1125 15:13:50.462498 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerDied","Data":"59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5"} Nov 25 15:13:50 crc kubenswrapper[4796]: I1125 15:13:50.462461 4796 generic.go:334] "Generic 
(PLEG): container finished" podID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerID="59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" exitCode=0 Nov 25 15:13:50 crc kubenswrapper[4796]: I1125 15:13:50.463103 4796 scope.go:117] "RemoveContainer" containerID="90cd350136ae8324b154646518487385fea8d658259347ebba5e0c7e669bf9e5" Nov 25 15:13:50 crc kubenswrapper[4796]: I1125 15:13:50.464035 4796 scope.go:117] "RemoveContainer" containerID="59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" Nov 25 15:13:50 crc kubenswrapper[4796]: E1125 15:13:50.464504 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:14:05 crc kubenswrapper[4796]: I1125 15:14:05.409671 4796 scope.go:117] "RemoveContainer" containerID="59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" Nov 25 15:14:05 crc kubenswrapper[4796]: E1125 15:14:05.410439 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:14:17 crc kubenswrapper[4796]: I1125 15:14:17.410751 4796 scope.go:117] "RemoveContainer" containerID="59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" Nov 25 15:14:17 crc kubenswrapper[4796]: E1125 15:14:17.411861 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:14:31 crc kubenswrapper[4796]: I1125 15:14:31.409379 4796 scope.go:117] "RemoveContainer" containerID="59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" Nov 25 15:14:31 crc kubenswrapper[4796]: E1125 15:14:31.410461 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:14:46 crc kubenswrapper[4796]: I1125 15:14:46.409267 4796 scope.go:117] "RemoveContainer" containerID="59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" Nov 25 15:14:46 crc kubenswrapper[4796]: E1125 15:14:46.410041 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:14:57 crc kubenswrapper[4796]: I1125 15:14:57.409684 4796 scope.go:117] "RemoveContainer" containerID="59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" Nov 25 15:14:57 crc kubenswrapper[4796]: E1125 15:14:57.410387 4796 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:15:00 crc kubenswrapper[4796]: I1125 15:15:00.144390 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401395-bbzr8"] Nov 25 15:15:00 crc kubenswrapper[4796]: I1125 15:15:00.146422 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-bbzr8" Nov 25 15:15:00 crc kubenswrapper[4796]: I1125 15:15:00.149162 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 15:15:00 crc kubenswrapper[4796]: I1125 15:15:00.153287 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 15:15:00 crc kubenswrapper[4796]: I1125 15:15:00.157767 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401395-bbzr8"] Nov 25 15:15:00 crc kubenswrapper[4796]: I1125 15:15:00.219770 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/584e7ae4-c844-4089-8195-d5df833ac9b8-secret-volume\") pod \"collect-profiles-29401395-bbzr8\" (UID: \"584e7ae4-c844-4089-8195-d5df833ac9b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-bbzr8" Nov 25 15:15:00 crc kubenswrapper[4796]: I1125 15:15:00.220052 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-br58z\" (UniqueName: \"kubernetes.io/projected/584e7ae4-c844-4089-8195-d5df833ac9b8-kube-api-access-br58z\") pod \"collect-profiles-29401395-bbzr8\" (UID: \"584e7ae4-c844-4089-8195-d5df833ac9b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-bbzr8" Nov 25 15:15:00 crc kubenswrapper[4796]: I1125 15:15:00.220217 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/584e7ae4-c844-4089-8195-d5df833ac9b8-config-volume\") pod \"collect-profiles-29401395-bbzr8\" (UID: \"584e7ae4-c844-4089-8195-d5df833ac9b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-bbzr8" Nov 25 15:15:00 crc kubenswrapper[4796]: I1125 15:15:00.322055 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/584e7ae4-c844-4089-8195-d5df833ac9b8-secret-volume\") pod \"collect-profiles-29401395-bbzr8\" (UID: \"584e7ae4-c844-4089-8195-d5df833ac9b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-bbzr8" Nov 25 15:15:00 crc kubenswrapper[4796]: I1125 15:15:00.322106 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br58z\" (UniqueName: \"kubernetes.io/projected/584e7ae4-c844-4089-8195-d5df833ac9b8-kube-api-access-br58z\") pod \"collect-profiles-29401395-bbzr8\" (UID: \"584e7ae4-c844-4089-8195-d5df833ac9b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-bbzr8" Nov 25 15:15:00 crc kubenswrapper[4796]: I1125 15:15:00.322175 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/584e7ae4-c844-4089-8195-d5df833ac9b8-config-volume\") pod \"collect-profiles-29401395-bbzr8\" (UID: \"584e7ae4-c844-4089-8195-d5df833ac9b8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-bbzr8" Nov 25 15:15:00 crc kubenswrapper[4796]: I1125 15:15:00.322980 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/584e7ae4-c844-4089-8195-d5df833ac9b8-config-volume\") pod \"collect-profiles-29401395-bbzr8\" (UID: \"584e7ae4-c844-4089-8195-d5df833ac9b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-bbzr8" Nov 25 15:15:00 crc kubenswrapper[4796]: I1125 15:15:00.332688 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/584e7ae4-c844-4089-8195-d5df833ac9b8-secret-volume\") pod \"collect-profiles-29401395-bbzr8\" (UID: \"584e7ae4-c844-4089-8195-d5df833ac9b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-bbzr8" Nov 25 15:15:00 crc kubenswrapper[4796]: I1125 15:15:00.349598 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br58z\" (UniqueName: \"kubernetes.io/projected/584e7ae4-c844-4089-8195-d5df833ac9b8-kube-api-access-br58z\") pod \"collect-profiles-29401395-bbzr8\" (UID: \"584e7ae4-c844-4089-8195-d5df833ac9b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-bbzr8" Nov 25 15:15:00 crc kubenswrapper[4796]: I1125 15:15:00.466500 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-bbzr8" Nov 25 15:15:00 crc kubenswrapper[4796]: I1125 15:15:00.923992 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401395-bbzr8"] Nov 25 15:15:01 crc kubenswrapper[4796]: I1125 15:15:01.255874 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-bbzr8" event={"ID":"584e7ae4-c844-4089-8195-d5df833ac9b8","Type":"ContainerStarted","Data":"c918120aaf79831af2f3150d3246e0fb177f9bde40cd12bf9d918c922ae50c2d"} Nov 25 15:15:01 crc kubenswrapper[4796]: I1125 15:15:01.255938 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-bbzr8" event={"ID":"584e7ae4-c844-4089-8195-d5df833ac9b8","Type":"ContainerStarted","Data":"bb3a7d4c7c0c1eb9821552030359cde5e59ceefd9986116a411d030ea16dafd2"} Nov 25 15:15:01 crc kubenswrapper[4796]: I1125 15:15:01.298300 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-bbzr8" podStartSLOduration=1.298241334 podStartE2EDuration="1.298241334s" podCreationTimestamp="2025-11-25 15:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:15:01.26914703 +0000 UTC m=+3029.612256464" watchObservedRunningTime="2025-11-25 15:15:01.298241334 +0000 UTC m=+3029.641350758" Nov 25 15:15:02 crc kubenswrapper[4796]: I1125 15:15:02.267405 4796 generic.go:334] "Generic (PLEG): container finished" podID="584e7ae4-c844-4089-8195-d5df833ac9b8" containerID="c918120aaf79831af2f3150d3246e0fb177f9bde40cd12bf9d918c922ae50c2d" exitCode=0 Nov 25 15:15:02 crc kubenswrapper[4796]: I1125 15:15:02.267475 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-bbzr8" event={"ID":"584e7ae4-c844-4089-8195-d5df833ac9b8","Type":"ContainerDied","Data":"c918120aaf79831af2f3150d3246e0fb177f9bde40cd12bf9d918c922ae50c2d"} Nov 25 15:15:03 crc kubenswrapper[4796]: I1125 15:15:03.701219 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-bbzr8" Nov 25 15:15:03 crc kubenswrapper[4796]: I1125 15:15:03.790830 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/584e7ae4-c844-4089-8195-d5df833ac9b8-secret-volume\") pod \"584e7ae4-c844-4089-8195-d5df833ac9b8\" (UID: \"584e7ae4-c844-4089-8195-d5df833ac9b8\") " Nov 25 15:15:03 crc kubenswrapper[4796]: I1125 15:15:03.790897 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/584e7ae4-c844-4089-8195-d5df833ac9b8-config-volume\") pod \"584e7ae4-c844-4089-8195-d5df833ac9b8\" (UID: \"584e7ae4-c844-4089-8195-d5df833ac9b8\") " Nov 25 15:15:03 crc kubenswrapper[4796]: I1125 15:15:03.791227 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br58z\" (UniqueName: \"kubernetes.io/projected/584e7ae4-c844-4089-8195-d5df833ac9b8-kube-api-access-br58z\") pod \"584e7ae4-c844-4089-8195-d5df833ac9b8\" (UID: \"584e7ae4-c844-4089-8195-d5df833ac9b8\") " Nov 25 15:15:03 crc kubenswrapper[4796]: I1125 15:15:03.791762 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/584e7ae4-c844-4089-8195-d5df833ac9b8-config-volume" (OuterVolumeSpecName: "config-volume") pod "584e7ae4-c844-4089-8195-d5df833ac9b8" (UID: "584e7ae4-c844-4089-8195-d5df833ac9b8"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:15:03 crc kubenswrapper[4796]: I1125 15:15:03.797924 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e7ae4-c844-4089-8195-d5df833ac9b8-kube-api-access-br58z" (OuterVolumeSpecName: "kube-api-access-br58z") pod "584e7ae4-c844-4089-8195-d5df833ac9b8" (UID: "584e7ae4-c844-4089-8195-d5df833ac9b8"). InnerVolumeSpecName "kube-api-access-br58z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:03 crc kubenswrapper[4796]: I1125 15:15:03.798804 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/584e7ae4-c844-4089-8195-d5df833ac9b8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "584e7ae4-c844-4089-8195-d5df833ac9b8" (UID: "584e7ae4-c844-4089-8195-d5df833ac9b8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:03 crc kubenswrapper[4796]: I1125 15:15:03.893485 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-br58z\" (UniqueName: \"kubernetes.io/projected/584e7ae4-c844-4089-8195-d5df833ac9b8-kube-api-access-br58z\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:03 crc kubenswrapper[4796]: I1125 15:15:03.893598 4796 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/584e7ae4-c844-4089-8195-d5df833ac9b8-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:03 crc kubenswrapper[4796]: I1125 15:15:03.893622 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/584e7ae4-c844-4089-8195-d5df833ac9b8-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:04 crc kubenswrapper[4796]: I1125 15:15:04.290519 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-bbzr8" 
event={"ID":"584e7ae4-c844-4089-8195-d5df833ac9b8","Type":"ContainerDied","Data":"bb3a7d4c7c0c1eb9821552030359cde5e59ceefd9986116a411d030ea16dafd2"} Nov 25 15:15:04 crc kubenswrapper[4796]: I1125 15:15:04.290560 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb3a7d4c7c0c1eb9821552030359cde5e59ceefd9986116a411d030ea16dafd2" Nov 25 15:15:04 crc kubenswrapper[4796]: I1125 15:15:04.290637 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-bbzr8" Nov 25 15:15:04 crc kubenswrapper[4796]: I1125 15:15:04.358661 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401350-ktltv"] Nov 25 15:15:04 crc kubenswrapper[4796]: I1125 15:15:04.365998 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401350-ktltv"] Nov 25 15:15:04 crc kubenswrapper[4796]: I1125 15:15:04.422913 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3678f38-14e6-4551-855d-271f89aeaf3b" path="/var/lib/kubelet/pods/b3678f38-14e6-4551-855d-271f89aeaf3b/volumes" Nov 25 15:15:09 crc kubenswrapper[4796]: I1125 15:15:09.410164 4796 scope.go:117] "RemoveContainer" containerID="59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" Nov 25 15:15:09 crc kubenswrapper[4796]: E1125 15:15:09.411475 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:15:24 crc kubenswrapper[4796]: I1125 15:15:24.409191 4796 scope.go:117] "RemoveContainer" 
containerID="59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" Nov 25 15:15:24 crc kubenswrapper[4796]: E1125 15:15:24.410541 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:15:36 crc kubenswrapper[4796]: I1125 15:15:36.410117 4796 scope.go:117] "RemoveContainer" containerID="59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" Nov 25 15:15:36 crc kubenswrapper[4796]: E1125 15:15:36.411676 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:15:48 crc kubenswrapper[4796]: I1125 15:15:48.771878 4796 scope.go:117] "RemoveContainer" containerID="e0189a49dfa3c8639c71a8ca067188b734ab3305a564f607530ea74535c33ebf" Nov 25 15:15:51 crc kubenswrapper[4796]: I1125 15:15:51.409202 4796 scope.go:117] "RemoveContainer" containerID="59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" Nov 25 15:15:51 crc kubenswrapper[4796]: E1125 15:15:51.410009 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:16:03 crc kubenswrapper[4796]: I1125 15:16:03.410137 4796 scope.go:117] "RemoveContainer" containerID="59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" Nov 25 15:16:03 crc kubenswrapper[4796]: E1125 15:16:03.412426 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.032260 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6gw2d"] Nov 25 15:16:04 crc kubenswrapper[4796]: E1125 15:16:04.032689 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="584e7ae4-c844-4089-8195-d5df833ac9b8" containerName="collect-profiles" Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.032703 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="584e7ae4-c844-4089-8195-d5df833ac9b8" containerName="collect-profiles" Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.032880 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="584e7ae4-c844-4089-8195-d5df833ac9b8" containerName="collect-profiles" Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.034123 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6gw2d" Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.056675 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6gw2d"] Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.124336 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxhs4\" (UniqueName: \"kubernetes.io/projected/a58268ef-bc08-43ce-a11a-dda18dbe317e-kube-api-access-pxhs4\") pod \"certified-operators-6gw2d\" (UID: \"a58268ef-bc08-43ce-a11a-dda18dbe317e\") " pod="openshift-marketplace/certified-operators-6gw2d" Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.124437 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a58268ef-bc08-43ce-a11a-dda18dbe317e-catalog-content\") pod \"certified-operators-6gw2d\" (UID: \"a58268ef-bc08-43ce-a11a-dda18dbe317e\") " pod="openshift-marketplace/certified-operators-6gw2d" Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.124467 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a58268ef-bc08-43ce-a11a-dda18dbe317e-utilities\") pod \"certified-operators-6gw2d\" (UID: \"a58268ef-bc08-43ce-a11a-dda18dbe317e\") " pod="openshift-marketplace/certified-operators-6gw2d" Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.227782 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a58268ef-bc08-43ce-a11a-dda18dbe317e-catalog-content\") pod \"certified-operators-6gw2d\" (UID: \"a58268ef-bc08-43ce-a11a-dda18dbe317e\") " pod="openshift-marketplace/certified-operators-6gw2d" Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.227849 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a58268ef-bc08-43ce-a11a-dda18dbe317e-utilities\") pod \"certified-operators-6gw2d\" (UID: \"a58268ef-bc08-43ce-a11a-dda18dbe317e\") " pod="openshift-marketplace/certified-operators-6gw2d" Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.227976 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxhs4\" (UniqueName: \"kubernetes.io/projected/a58268ef-bc08-43ce-a11a-dda18dbe317e-kube-api-access-pxhs4\") pod \"certified-operators-6gw2d\" (UID: \"a58268ef-bc08-43ce-a11a-dda18dbe317e\") " pod="openshift-marketplace/certified-operators-6gw2d" Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.228672 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a58268ef-bc08-43ce-a11a-dda18dbe317e-utilities\") pod \"certified-operators-6gw2d\" (UID: \"a58268ef-bc08-43ce-a11a-dda18dbe317e\") " pod="openshift-marketplace/certified-operators-6gw2d" Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.228756 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a58268ef-bc08-43ce-a11a-dda18dbe317e-catalog-content\") pod \"certified-operators-6gw2d\" (UID: \"a58268ef-bc08-43ce-a11a-dda18dbe317e\") " pod="openshift-marketplace/certified-operators-6gw2d" Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.245839 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9c8v7"] Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.247723 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9c8v7" Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.250305 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxhs4\" (UniqueName: \"kubernetes.io/projected/a58268ef-bc08-43ce-a11a-dda18dbe317e-kube-api-access-pxhs4\") pod \"certified-operators-6gw2d\" (UID: \"a58268ef-bc08-43ce-a11a-dda18dbe317e\") " pod="openshift-marketplace/certified-operators-6gw2d" Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.261985 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9c8v7"] Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.329734 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bdb5f1e-90ba-4f2a-9515-61c103291ec8-utilities\") pod \"community-operators-9c8v7\" (UID: \"5bdb5f1e-90ba-4f2a-9515-61c103291ec8\") " pod="openshift-marketplace/community-operators-9c8v7" Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.330076 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cqgh\" (UniqueName: \"kubernetes.io/projected/5bdb5f1e-90ba-4f2a-9515-61c103291ec8-kube-api-access-6cqgh\") pod \"community-operators-9c8v7\" (UID: \"5bdb5f1e-90ba-4f2a-9515-61c103291ec8\") " pod="openshift-marketplace/community-operators-9c8v7" Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.330273 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bdb5f1e-90ba-4f2a-9515-61c103291ec8-catalog-content\") pod \"community-operators-9c8v7\" (UID: \"5bdb5f1e-90ba-4f2a-9515-61c103291ec8\") " pod="openshift-marketplace/community-operators-9c8v7" Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.368664 4796 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6gw2d" Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.432108 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cqgh\" (UniqueName: \"kubernetes.io/projected/5bdb5f1e-90ba-4f2a-9515-61c103291ec8-kube-api-access-6cqgh\") pod \"community-operators-9c8v7\" (UID: \"5bdb5f1e-90ba-4f2a-9515-61c103291ec8\") " pod="openshift-marketplace/community-operators-9c8v7" Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.434838 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bdb5f1e-90ba-4f2a-9515-61c103291ec8-catalog-content\") pod \"community-operators-9c8v7\" (UID: \"5bdb5f1e-90ba-4f2a-9515-61c103291ec8\") " pod="openshift-marketplace/community-operators-9c8v7" Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.436863 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bdb5f1e-90ba-4f2a-9515-61c103291ec8-catalog-content\") pod \"community-operators-9c8v7\" (UID: \"5bdb5f1e-90ba-4f2a-9515-61c103291ec8\") " pod="openshift-marketplace/community-operators-9c8v7" Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.434879 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bdb5f1e-90ba-4f2a-9515-61c103291ec8-utilities\") pod \"community-operators-9c8v7\" (UID: \"5bdb5f1e-90ba-4f2a-9515-61c103291ec8\") " pod="openshift-marketplace/community-operators-9c8v7" Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.437980 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bdb5f1e-90ba-4f2a-9515-61c103291ec8-utilities\") pod \"community-operators-9c8v7\" (UID: \"5bdb5f1e-90ba-4f2a-9515-61c103291ec8\") " 
pod="openshift-marketplace/community-operators-9c8v7" Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.450355 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cqgh\" (UniqueName: \"kubernetes.io/projected/5bdb5f1e-90ba-4f2a-9515-61c103291ec8-kube-api-access-6cqgh\") pod \"community-operators-9c8v7\" (UID: \"5bdb5f1e-90ba-4f2a-9515-61c103291ec8\") " pod="openshift-marketplace/community-operators-9c8v7" Nov 25 15:16:04 crc kubenswrapper[4796]: I1125 15:16:04.624433 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9c8v7" Nov 25 15:16:05 crc kubenswrapper[4796]: I1125 15:16:05.089370 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6gw2d"] Nov 25 15:16:05 crc kubenswrapper[4796]: W1125 15:16:05.185325 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bdb5f1e_90ba_4f2a_9515_61c103291ec8.slice/crio-2b19e298d08c90eb4cc72fc0801aeb9b1c6b076904f2382c63b7e3bdf94508cb WatchSource:0}: Error finding container 2b19e298d08c90eb4cc72fc0801aeb9b1c6b076904f2382c63b7e3bdf94508cb: Status 404 returned error can't find the container with id 2b19e298d08c90eb4cc72fc0801aeb9b1c6b076904f2382c63b7e3bdf94508cb Nov 25 15:16:05 crc kubenswrapper[4796]: I1125 15:16:05.185879 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9c8v7"] Nov 25 15:16:05 crc kubenswrapper[4796]: I1125 15:16:05.899687 4796 generic.go:334] "Generic (PLEG): container finished" podID="5bdb5f1e-90ba-4f2a-9515-61c103291ec8" containerID="301e22b48bb1e850897458e34bdf07751827eee8ccdde7c486e8d0afa7e6a3e6" exitCode=0 Nov 25 15:16:05 crc kubenswrapper[4796]: I1125 15:16:05.899955 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9c8v7" 
event={"ID":"5bdb5f1e-90ba-4f2a-9515-61c103291ec8","Type":"ContainerDied","Data":"301e22b48bb1e850897458e34bdf07751827eee8ccdde7c486e8d0afa7e6a3e6"} Nov 25 15:16:05 crc kubenswrapper[4796]: I1125 15:16:05.900178 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9c8v7" event={"ID":"5bdb5f1e-90ba-4f2a-9515-61c103291ec8","Type":"ContainerStarted","Data":"2b19e298d08c90eb4cc72fc0801aeb9b1c6b076904f2382c63b7e3bdf94508cb"} Nov 25 15:16:05 crc kubenswrapper[4796]: I1125 15:16:05.907370 4796 generic.go:334] "Generic (PLEG): container finished" podID="a58268ef-bc08-43ce-a11a-dda18dbe317e" containerID="6edde70b44c13a4c9b23d0671adf0f561202ffbd5edc36a906d8c1b58c075fac" exitCode=0 Nov 25 15:16:05 crc kubenswrapper[4796]: I1125 15:16:05.907414 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gw2d" event={"ID":"a58268ef-bc08-43ce-a11a-dda18dbe317e","Type":"ContainerDied","Data":"6edde70b44c13a4c9b23d0671adf0f561202ffbd5edc36a906d8c1b58c075fac"} Nov 25 15:16:05 crc kubenswrapper[4796]: I1125 15:16:05.907444 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gw2d" event={"ID":"a58268ef-bc08-43ce-a11a-dda18dbe317e","Type":"ContainerStarted","Data":"a3588a0ce7e6a863b89f6ac230d728f0f9974f06f1c6053cf387c2180383f5fe"} Nov 25 15:16:06 crc kubenswrapper[4796]: I1125 15:16:06.919383 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9c8v7" event={"ID":"5bdb5f1e-90ba-4f2a-9515-61c103291ec8","Type":"ContainerStarted","Data":"e5b1e4640402a156a60dde08b7a51051bab6a62514c5c57916d4cf7593b57c9d"} Nov 25 15:16:06 crc kubenswrapper[4796]: I1125 15:16:06.922663 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gw2d" 
event={"ID":"a58268ef-bc08-43ce-a11a-dda18dbe317e","Type":"ContainerStarted","Data":"935c91b84f89e49fce4774c342bdb90eed90f4fb4ff819204b88b0842ae44bcf"} Nov 25 15:16:07 crc kubenswrapper[4796]: I1125 15:16:07.935163 4796 generic.go:334] "Generic (PLEG): container finished" podID="5bdb5f1e-90ba-4f2a-9515-61c103291ec8" containerID="e5b1e4640402a156a60dde08b7a51051bab6a62514c5c57916d4cf7593b57c9d" exitCode=0 Nov 25 15:16:07 crc kubenswrapper[4796]: I1125 15:16:07.935311 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9c8v7" event={"ID":"5bdb5f1e-90ba-4f2a-9515-61c103291ec8","Type":"ContainerDied","Data":"e5b1e4640402a156a60dde08b7a51051bab6a62514c5c57916d4cf7593b57c9d"} Nov 25 15:16:09 crc kubenswrapper[4796]: I1125 15:16:09.956559 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9c8v7" event={"ID":"5bdb5f1e-90ba-4f2a-9515-61c103291ec8","Type":"ContainerStarted","Data":"3176a0fbbeb81a805fab5f4166625d786dfcc039b94d04ce6efd92efbc5f6865"} Nov 25 15:16:09 crc kubenswrapper[4796]: I1125 15:16:09.959046 4796 generic.go:334] "Generic (PLEG): container finished" podID="a58268ef-bc08-43ce-a11a-dda18dbe317e" containerID="935c91b84f89e49fce4774c342bdb90eed90f4fb4ff819204b88b0842ae44bcf" exitCode=0 Nov 25 15:16:09 crc kubenswrapper[4796]: I1125 15:16:09.959086 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gw2d" event={"ID":"a58268ef-bc08-43ce-a11a-dda18dbe317e","Type":"ContainerDied","Data":"935c91b84f89e49fce4774c342bdb90eed90f4fb4ff819204b88b0842ae44bcf"} Nov 25 15:16:09 crc kubenswrapper[4796]: I1125 15:16:09.976674 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9c8v7" podStartSLOduration=2.768816554 podStartE2EDuration="5.976660729s" podCreationTimestamp="2025-11-25 15:16:04 +0000 UTC" firstStartedPulling="2025-11-25 15:16:05.906721964 +0000 
UTC m=+3094.249831398" lastFinishedPulling="2025-11-25 15:16:09.114566149 +0000 UTC m=+3097.457675573" observedRunningTime="2025-11-25 15:16:09.973071636 +0000 UTC m=+3098.316181070" watchObservedRunningTime="2025-11-25 15:16:09.976660729 +0000 UTC m=+3098.319770153" Nov 25 15:16:10 crc kubenswrapper[4796]: I1125 15:16:10.970234 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gw2d" event={"ID":"a58268ef-bc08-43ce-a11a-dda18dbe317e","Type":"ContainerStarted","Data":"5146b535e035e369738223019fc6863ac9ec0d390c834fe75df362f81a29ee44"} Nov 25 15:16:10 crc kubenswrapper[4796]: I1125 15:16:10.994364 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6gw2d" podStartSLOduration=2.441341946 podStartE2EDuration="6.994337385s" podCreationTimestamp="2025-11-25 15:16:04 +0000 UTC" firstStartedPulling="2025-11-25 15:16:05.911157243 +0000 UTC m=+3094.254266677" lastFinishedPulling="2025-11-25 15:16:10.464152692 +0000 UTC m=+3098.807262116" observedRunningTime="2025-11-25 15:16:10.990198466 +0000 UTC m=+3099.333307910" watchObservedRunningTime="2025-11-25 15:16:10.994337385 +0000 UTC m=+3099.337446809" Nov 25 15:16:14 crc kubenswrapper[4796]: I1125 15:16:14.369187 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6gw2d" Nov 25 15:16:14 crc kubenswrapper[4796]: I1125 15:16:14.369826 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6gw2d" Nov 25 15:16:14 crc kubenswrapper[4796]: I1125 15:16:14.428745 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6gw2d" Nov 25 15:16:14 crc kubenswrapper[4796]: I1125 15:16:14.624858 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9c8v7" Nov 25 15:16:14 
crc kubenswrapper[4796]: I1125 15:16:14.624943 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9c8v7" Nov 25 15:16:14 crc kubenswrapper[4796]: I1125 15:16:14.716333 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9c8v7" Nov 25 15:16:15 crc kubenswrapper[4796]: I1125 15:16:15.060271 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9c8v7" Nov 25 15:16:15 crc kubenswrapper[4796]: I1125 15:16:15.075802 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6gw2d" Nov 25 15:16:15 crc kubenswrapper[4796]: I1125 15:16:15.617154 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6gw2d"] Nov 25 15:16:17 crc kubenswrapper[4796]: I1125 15:16:17.018640 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9c8v7"] Nov 25 15:16:17 crc kubenswrapper[4796]: I1125 15:16:17.026161 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9c8v7" podUID="5bdb5f1e-90ba-4f2a-9515-61c103291ec8" containerName="registry-server" containerID="cri-o://3176a0fbbeb81a805fab5f4166625d786dfcc039b94d04ce6efd92efbc5f6865" gracePeriod=2 Nov 25 15:16:17 crc kubenswrapper[4796]: I1125 15:16:17.026335 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6gw2d" podUID="a58268ef-bc08-43ce-a11a-dda18dbe317e" containerName="registry-server" containerID="cri-o://5146b535e035e369738223019fc6863ac9ec0d390c834fe75df362f81a29ee44" gracePeriod=2 Nov 25 15:16:17 crc kubenswrapper[4796]: I1125 15:16:17.409308 4796 scope.go:117] "RemoveContainer" 
containerID="59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" Nov 25 15:16:17 crc kubenswrapper[4796]: E1125 15:16:17.409933 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:16:17 crc kubenswrapper[4796]: E1125 15:16:17.514248 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda58268ef_bc08_43ce_a11a_dda18dbe317e.slice/crio-5146b535e035e369738223019fc6863ac9ec0d390c834fe75df362f81a29ee44.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda58268ef_bc08_43ce_a11a_dda18dbe317e.slice/crio-conmon-5146b535e035e369738223019fc6863ac9ec0d390c834fe75df362f81a29ee44.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bdb5f1e_90ba_4f2a_9515_61c103291ec8.slice/crio-conmon-3176a0fbbeb81a805fab5f4166625d786dfcc039b94d04ce6efd92efbc5f6865.scope\": RecentStats: unable to find data in memory cache]" Nov 25 15:16:17 crc kubenswrapper[4796]: I1125 15:16:17.730212 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6gw2d" Nov 25 15:16:17 crc kubenswrapper[4796]: I1125 15:16:17.823929 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a58268ef-bc08-43ce-a11a-dda18dbe317e-utilities\") pod \"a58268ef-bc08-43ce-a11a-dda18dbe317e\" (UID: \"a58268ef-bc08-43ce-a11a-dda18dbe317e\") " Nov 25 15:16:17 crc kubenswrapper[4796]: I1125 15:16:17.824000 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a58268ef-bc08-43ce-a11a-dda18dbe317e-catalog-content\") pod \"a58268ef-bc08-43ce-a11a-dda18dbe317e\" (UID: \"a58268ef-bc08-43ce-a11a-dda18dbe317e\") " Nov 25 15:16:17 crc kubenswrapper[4796]: I1125 15:16:17.824033 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxhs4\" (UniqueName: \"kubernetes.io/projected/a58268ef-bc08-43ce-a11a-dda18dbe317e-kube-api-access-pxhs4\") pod \"a58268ef-bc08-43ce-a11a-dda18dbe317e\" (UID: \"a58268ef-bc08-43ce-a11a-dda18dbe317e\") " Nov 25 15:16:17 crc kubenswrapper[4796]: I1125 15:16:17.824681 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a58268ef-bc08-43ce-a11a-dda18dbe317e-utilities" (OuterVolumeSpecName: "utilities") pod "a58268ef-bc08-43ce-a11a-dda18dbe317e" (UID: "a58268ef-bc08-43ce-a11a-dda18dbe317e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:16:17 crc kubenswrapper[4796]: I1125 15:16:17.845900 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a58268ef-bc08-43ce-a11a-dda18dbe317e-kube-api-access-pxhs4" (OuterVolumeSpecName: "kube-api-access-pxhs4") pod "a58268ef-bc08-43ce-a11a-dda18dbe317e" (UID: "a58268ef-bc08-43ce-a11a-dda18dbe317e"). InnerVolumeSpecName "kube-api-access-pxhs4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:17 crc kubenswrapper[4796]: I1125 15:16:17.871793 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a58268ef-bc08-43ce-a11a-dda18dbe317e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a58268ef-bc08-43ce-a11a-dda18dbe317e" (UID: "a58268ef-bc08-43ce-a11a-dda18dbe317e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:16:17 crc kubenswrapper[4796]: I1125 15:16:17.925699 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a58268ef-bc08-43ce-a11a-dda18dbe317e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:17 crc kubenswrapper[4796]: I1125 15:16:17.925730 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxhs4\" (UniqueName: \"kubernetes.io/projected/a58268ef-bc08-43ce-a11a-dda18dbe317e-kube-api-access-pxhs4\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:17 crc kubenswrapper[4796]: I1125 15:16:17.925742 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a58268ef-bc08-43ce-a11a-dda18dbe317e-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.035134 4796 generic.go:334] "Generic (PLEG): container finished" podID="5bdb5f1e-90ba-4f2a-9515-61c103291ec8" containerID="3176a0fbbeb81a805fab5f4166625d786dfcc039b94d04ce6efd92efbc5f6865" exitCode=0 Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.035204 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9c8v7" event={"ID":"5bdb5f1e-90ba-4f2a-9515-61c103291ec8","Type":"ContainerDied","Data":"3176a0fbbeb81a805fab5f4166625d786dfcc039b94d04ce6efd92efbc5f6865"} Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.035249 4796 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-9c8v7" event={"ID":"5bdb5f1e-90ba-4f2a-9515-61c103291ec8","Type":"ContainerDied","Data":"2b19e298d08c90eb4cc72fc0801aeb9b1c6b076904f2382c63b7e3bdf94508cb"} Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.035261 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b19e298d08c90eb4cc72fc0801aeb9b1c6b076904f2382c63b7e3bdf94508cb" Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.039103 4796 generic.go:334] "Generic (PLEG): container finished" podID="a58268ef-bc08-43ce-a11a-dda18dbe317e" containerID="5146b535e035e369738223019fc6863ac9ec0d390c834fe75df362f81a29ee44" exitCode=0 Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.039209 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6gw2d" Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.039274 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gw2d" event={"ID":"a58268ef-bc08-43ce-a11a-dda18dbe317e","Type":"ContainerDied","Data":"5146b535e035e369738223019fc6863ac9ec0d390c834fe75df362f81a29ee44"} Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.039328 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6gw2d" event={"ID":"a58268ef-bc08-43ce-a11a-dda18dbe317e","Type":"ContainerDied","Data":"a3588a0ce7e6a863b89f6ac230d728f0f9974f06f1c6053cf387c2180383f5fe"} Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.039351 4796 scope.go:117] "RemoveContainer" containerID="5146b535e035e369738223019fc6863ac9ec0d390c834fe75df362f81a29ee44" Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.039480 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9c8v7" Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.065296 4796 scope.go:117] "RemoveContainer" containerID="935c91b84f89e49fce4774c342bdb90eed90f4fb4ff819204b88b0842ae44bcf" Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.083712 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6gw2d"] Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.091802 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6gw2d"] Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.115246 4796 scope.go:117] "RemoveContainer" containerID="6edde70b44c13a4c9b23d0671adf0f561202ffbd5edc36a906d8c1b58c075fac" Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.128273 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bdb5f1e-90ba-4f2a-9515-61c103291ec8-catalog-content\") pod \"5bdb5f1e-90ba-4f2a-9515-61c103291ec8\" (UID: \"5bdb5f1e-90ba-4f2a-9515-61c103291ec8\") " Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.128323 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cqgh\" (UniqueName: \"kubernetes.io/projected/5bdb5f1e-90ba-4f2a-9515-61c103291ec8-kube-api-access-6cqgh\") pod \"5bdb5f1e-90ba-4f2a-9515-61c103291ec8\" (UID: \"5bdb5f1e-90ba-4f2a-9515-61c103291ec8\") " Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.128384 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bdb5f1e-90ba-4f2a-9515-61c103291ec8-utilities\") pod \"5bdb5f1e-90ba-4f2a-9515-61c103291ec8\" (UID: \"5bdb5f1e-90ba-4f2a-9515-61c103291ec8\") " Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.129260 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5bdb5f1e-90ba-4f2a-9515-61c103291ec8-utilities" (OuterVolumeSpecName: "utilities") pod "5bdb5f1e-90ba-4f2a-9515-61c103291ec8" (UID: "5bdb5f1e-90ba-4f2a-9515-61c103291ec8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.132942 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bdb5f1e-90ba-4f2a-9515-61c103291ec8-kube-api-access-6cqgh" (OuterVolumeSpecName: "kube-api-access-6cqgh") pod "5bdb5f1e-90ba-4f2a-9515-61c103291ec8" (UID: "5bdb5f1e-90ba-4f2a-9515-61c103291ec8"). InnerVolumeSpecName "kube-api-access-6cqgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.133681 4796 scope.go:117] "RemoveContainer" containerID="5146b535e035e369738223019fc6863ac9ec0d390c834fe75df362f81a29ee44" Nov 25 15:16:18 crc kubenswrapper[4796]: E1125 15:16:18.134312 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5146b535e035e369738223019fc6863ac9ec0d390c834fe75df362f81a29ee44\": container with ID starting with 5146b535e035e369738223019fc6863ac9ec0d390c834fe75df362f81a29ee44 not found: ID does not exist" containerID="5146b535e035e369738223019fc6863ac9ec0d390c834fe75df362f81a29ee44" Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.134360 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5146b535e035e369738223019fc6863ac9ec0d390c834fe75df362f81a29ee44"} err="failed to get container status \"5146b535e035e369738223019fc6863ac9ec0d390c834fe75df362f81a29ee44\": rpc error: code = NotFound desc = could not find container \"5146b535e035e369738223019fc6863ac9ec0d390c834fe75df362f81a29ee44\": container with ID starting with 5146b535e035e369738223019fc6863ac9ec0d390c834fe75df362f81a29ee44 not found: ID does not exist" Nov 25 
15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.134392 4796 scope.go:117] "RemoveContainer" containerID="935c91b84f89e49fce4774c342bdb90eed90f4fb4ff819204b88b0842ae44bcf" Nov 25 15:16:18 crc kubenswrapper[4796]: E1125 15:16:18.134773 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"935c91b84f89e49fce4774c342bdb90eed90f4fb4ff819204b88b0842ae44bcf\": container with ID starting with 935c91b84f89e49fce4774c342bdb90eed90f4fb4ff819204b88b0842ae44bcf not found: ID does not exist" containerID="935c91b84f89e49fce4774c342bdb90eed90f4fb4ff819204b88b0842ae44bcf" Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.134805 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"935c91b84f89e49fce4774c342bdb90eed90f4fb4ff819204b88b0842ae44bcf"} err="failed to get container status \"935c91b84f89e49fce4774c342bdb90eed90f4fb4ff819204b88b0842ae44bcf\": rpc error: code = NotFound desc = could not find container \"935c91b84f89e49fce4774c342bdb90eed90f4fb4ff819204b88b0842ae44bcf\": container with ID starting with 935c91b84f89e49fce4774c342bdb90eed90f4fb4ff819204b88b0842ae44bcf not found: ID does not exist" Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.134825 4796 scope.go:117] "RemoveContainer" containerID="6edde70b44c13a4c9b23d0671adf0f561202ffbd5edc36a906d8c1b58c075fac" Nov 25 15:16:18 crc kubenswrapper[4796]: E1125 15:16:18.135102 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6edde70b44c13a4c9b23d0671adf0f561202ffbd5edc36a906d8c1b58c075fac\": container with ID starting with 6edde70b44c13a4c9b23d0671adf0f561202ffbd5edc36a906d8c1b58c075fac not found: ID does not exist" containerID="6edde70b44c13a4c9b23d0671adf0f561202ffbd5edc36a906d8c1b58c075fac" Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.135138 4796 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"6edde70b44c13a4c9b23d0671adf0f561202ffbd5edc36a906d8c1b58c075fac"} err="failed to get container status \"6edde70b44c13a4c9b23d0671adf0f561202ffbd5edc36a906d8c1b58c075fac\": rpc error: code = NotFound desc = could not find container \"6edde70b44c13a4c9b23d0671adf0f561202ffbd5edc36a906d8c1b58c075fac\": container with ID starting with 6edde70b44c13a4c9b23d0671adf0f561202ffbd5edc36a906d8c1b58c075fac not found: ID does not exist" Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.181109 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bdb5f1e-90ba-4f2a-9515-61c103291ec8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5bdb5f1e-90ba-4f2a-9515-61c103291ec8" (UID: "5bdb5f1e-90ba-4f2a-9515-61c103291ec8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.231116 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bdb5f1e-90ba-4f2a-9515-61c103291ec8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.231155 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cqgh\" (UniqueName: \"kubernetes.io/projected/5bdb5f1e-90ba-4f2a-9515-61c103291ec8-kube-api-access-6cqgh\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.231171 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bdb5f1e-90ba-4f2a-9515-61c103291ec8-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:18 crc kubenswrapper[4796]: I1125 15:16:18.425858 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a58268ef-bc08-43ce-a11a-dda18dbe317e" path="/var/lib/kubelet/pods/a58268ef-bc08-43ce-a11a-dda18dbe317e/volumes" Nov 25 15:16:19 crc 
kubenswrapper[4796]: I1125 15:16:19.051287 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9c8v7" Nov 25 15:16:19 crc kubenswrapper[4796]: I1125 15:16:19.080610 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9c8v7"] Nov 25 15:16:19 crc kubenswrapper[4796]: I1125 15:16:19.091529 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9c8v7"] Nov 25 15:16:20 crc kubenswrapper[4796]: I1125 15:16:20.420500 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bdb5f1e-90ba-4f2a-9515-61c103291ec8" path="/var/lib/kubelet/pods/5bdb5f1e-90ba-4f2a-9515-61c103291ec8/volumes" Nov 25 15:16:31 crc kubenswrapper[4796]: I1125 15:16:31.409564 4796 scope.go:117] "RemoveContainer" containerID="59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" Nov 25 15:16:31 crc kubenswrapper[4796]: E1125 15:16:31.412973 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:16:46 crc kubenswrapper[4796]: I1125 15:16:46.410299 4796 scope.go:117] "RemoveContainer" containerID="59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" Nov 25 15:16:46 crc kubenswrapper[4796]: E1125 15:16:46.411177 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:16:59 crc kubenswrapper[4796]: I1125 15:16:59.409168 4796 scope.go:117] "RemoveContainer" containerID="59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" Nov 25 15:16:59 crc kubenswrapper[4796]: E1125 15:16:59.410328 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:17:13 crc kubenswrapper[4796]: I1125 15:17:13.409893 4796 scope.go:117] "RemoveContainer" containerID="59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" Nov 25 15:17:13 crc kubenswrapper[4796]: E1125 15:17:13.410874 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:17:28 crc kubenswrapper[4796]: I1125 15:17:28.410473 4796 scope.go:117] "RemoveContainer" containerID="59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" Nov 25 15:17:28 crc kubenswrapper[4796]: E1125 15:17:28.412015 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:17:40 crc kubenswrapper[4796]: I1125 15:17:40.410639 4796 scope.go:117] "RemoveContainer" containerID="59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" Nov 25 15:17:40 crc kubenswrapper[4796]: E1125 15:17:40.411910 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:17:51 crc kubenswrapper[4796]: I1125 15:17:51.409063 4796 scope.go:117] "RemoveContainer" containerID="59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" Nov 25 15:17:51 crc kubenswrapper[4796]: E1125 15:17:51.410172 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:18:06 crc kubenswrapper[4796]: I1125 15:18:06.410017 4796 scope.go:117] "RemoveContainer" containerID="59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" Nov 25 15:18:06 crc kubenswrapper[4796]: E1125 15:18:06.410785 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:18:21 crc kubenswrapper[4796]: I1125 15:18:21.410704 4796 scope.go:117] "RemoveContainer" containerID="59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" Nov 25 15:18:21 crc kubenswrapper[4796]: E1125 15:18:21.411677 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:18:35 crc kubenswrapper[4796]: I1125 15:18:35.409771 4796 scope.go:117] "RemoveContainer" containerID="59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" Nov 25 15:18:35 crc kubenswrapper[4796]: E1125 15:18:35.410940 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:18:46 crc kubenswrapper[4796]: I1125 15:18:46.410066 4796 scope.go:117] "RemoveContainer" containerID="59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" Nov 25 15:18:46 crc kubenswrapper[4796]: E1125 15:18:46.411368 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:19:00 crc kubenswrapper[4796]: I1125 15:19:00.409395 4796 scope.go:117] "RemoveContainer" containerID="59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" Nov 25 15:19:00 crc kubenswrapper[4796]: I1125 15:19:00.887967 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerStarted","Data":"d05bce62cdff9f1975558d9a6547b9599dacc5553f38f0741095e09998e9b591"} Nov 25 15:21:19 crc kubenswrapper[4796]: I1125 15:21:19.513919 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:21:19 crc kubenswrapper[4796]: I1125 15:21:19.514550 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:21:49 crc kubenswrapper[4796]: I1125 15:21:49.513717 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:21:49 crc kubenswrapper[4796]: I1125 15:21:49.514528 4796 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:22:19 crc kubenswrapper[4796]: I1125 15:22:19.513837 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:22:19 crc kubenswrapper[4796]: I1125 15:22:19.514667 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:22:19 crc kubenswrapper[4796]: I1125 15:22:19.514745 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 15:22:19 crc kubenswrapper[4796]: I1125 15:22:19.515938 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d05bce62cdff9f1975558d9a6547b9599dacc5553f38f0741095e09998e9b591"} pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 15:22:19 crc kubenswrapper[4796]: I1125 15:22:19.516061 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" 
containerID="cri-o://d05bce62cdff9f1975558d9a6547b9599dacc5553f38f0741095e09998e9b591" gracePeriod=600 Nov 25 15:22:20 crc kubenswrapper[4796]: I1125 15:22:20.183469 4796 generic.go:334] "Generic (PLEG): container finished" podID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerID="d05bce62cdff9f1975558d9a6547b9599dacc5553f38f0741095e09998e9b591" exitCode=0 Nov 25 15:22:20 crc kubenswrapper[4796]: I1125 15:22:20.183665 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerDied","Data":"d05bce62cdff9f1975558d9a6547b9599dacc5553f38f0741095e09998e9b591"} Nov 25 15:22:20 crc kubenswrapper[4796]: I1125 15:22:20.184176 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerStarted","Data":"397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee"} Nov 25 15:22:20 crc kubenswrapper[4796]: I1125 15:22:20.184217 4796 scope.go:117] "RemoveContainer" containerID="59a444b7012da3cdcea08de5e64c603f7c25ee226f1b9192e3b9928f0badbba5" Nov 25 15:22:48 crc kubenswrapper[4796]: I1125 15:22:48.965245 4796 scope.go:117] "RemoveContainer" containerID="3176a0fbbeb81a805fab5f4166625d786dfcc039b94d04ce6efd92efbc5f6865" Nov 25 15:22:48 crc kubenswrapper[4796]: I1125 15:22:48.994234 4796 scope.go:117] "RemoveContainer" containerID="301e22b48bb1e850897458e34bdf07751827eee8ccdde7c486e8d0afa7e6a3e6" Nov 25 15:22:49 crc kubenswrapper[4796]: I1125 15:22:49.018834 4796 scope.go:117] "RemoveContainer" containerID="e5b1e4640402a156a60dde08b7a51051bab6a62514c5c57916d4cf7593b57c9d" Nov 25 15:24:19 crc kubenswrapper[4796]: I1125 15:24:19.513965 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:24:19 crc kubenswrapper[4796]: I1125 15:24:19.514537 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:24:49 crc kubenswrapper[4796]: I1125 15:24:49.514632 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:24:49 crc kubenswrapper[4796]: I1125 15:24:49.515903 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:24:58 crc kubenswrapper[4796]: I1125 15:24:58.875980 4796 generic.go:334] "Generic (PLEG): container finished" podID="6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6" containerID="b8dc38954f622176b2e8914db7e48653c211deb6e5ecb277d5e8a6154c085740" exitCode=0 Nov 25 15:24:58 crc kubenswrapper[4796]: I1125 15:24:58.876077 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6","Type":"ContainerDied","Data":"b8dc38954f622176b2e8914db7e48653c211deb6e5ecb277d5e8a6154c085740"} Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.342506 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.459190 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.459279 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-config-data\") pod \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.459497 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-openstack-config\") pod \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.459619 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-test-operator-ephemeral-temporary\") pod \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.459728 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cd9r\" (UniqueName: \"kubernetes.io/projected/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-kube-api-access-2cd9r\") pod \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.459795 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-openstack-config-secret\") pod \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.459826 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-ca-certs\") pod \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.459867 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-test-operator-ephemeral-workdir\") pod \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.459887 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-ssh-key\") pod \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\" (UID: \"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6\") " Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.460927 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6" (UID: "6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.461315 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-config-data" (OuterVolumeSpecName: "config-data") pod "6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6" (UID: "6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.465146 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "test-operator-logs") pod "6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6" (UID: "6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.465835 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-kube-api-access-2cd9r" (OuterVolumeSpecName: "kube-api-access-2cd9r") pod "6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6" (UID: "6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6"). InnerVolumeSpecName "kube-api-access-2cd9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.468001 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6" (UID: "6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.489326 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6" (UID: "6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.501261 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6" (UID: "6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.530813 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6" (UID: "6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.549159 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6" (UID: "6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.562660 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cd9r\" (UniqueName: \"kubernetes.io/projected/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-kube-api-access-2cd9r\") on node \"crc\" DevicePath \"\"" Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.562700 4796 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.562713 4796 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.562725 4796 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.562740 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.562767 4796 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.562784 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 
15:25:00.562798 4796 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.562811 4796 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.591367 4796 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.664632 4796 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.901872 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6","Type":"ContainerDied","Data":"56b6ae5c9c523e3936ef637fa7ffb3e3f796d34e001b197b030833ec1d1cf0ee"} Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.902381 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56b6ae5c9c523e3936ef637fa7ffb3e3f796d34e001b197b030833ec1d1cf0ee" Nov 25 15:25:00 crc kubenswrapper[4796]: I1125 15:25:00.902504 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 25 15:25:06 crc kubenswrapper[4796]: I1125 15:25:06.539226 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 25 15:25:06 crc kubenswrapper[4796]: E1125 15:25:06.540374 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a58268ef-bc08-43ce-a11a-dda18dbe317e" containerName="extract-utilities" Nov 25 15:25:06 crc kubenswrapper[4796]: I1125 15:25:06.540401 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a58268ef-bc08-43ce-a11a-dda18dbe317e" containerName="extract-utilities" Nov 25 15:25:06 crc kubenswrapper[4796]: E1125 15:25:06.540433 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bdb5f1e-90ba-4f2a-9515-61c103291ec8" containerName="extract-utilities" Nov 25 15:25:06 crc kubenswrapper[4796]: I1125 15:25:06.540445 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bdb5f1e-90ba-4f2a-9515-61c103291ec8" containerName="extract-utilities" Nov 25 15:25:06 crc kubenswrapper[4796]: E1125 15:25:06.540478 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a58268ef-bc08-43ce-a11a-dda18dbe317e" containerName="registry-server" Nov 25 15:25:06 crc kubenswrapper[4796]: I1125 15:25:06.540490 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a58268ef-bc08-43ce-a11a-dda18dbe317e" containerName="registry-server" Nov 25 15:25:06 crc kubenswrapper[4796]: E1125 15:25:06.540521 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bdb5f1e-90ba-4f2a-9515-61c103291ec8" containerName="extract-content" Nov 25 15:25:06 crc kubenswrapper[4796]: I1125 15:25:06.540531 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bdb5f1e-90ba-4f2a-9515-61c103291ec8" containerName="extract-content" Nov 25 15:25:06 crc kubenswrapper[4796]: E1125 15:25:06.540565 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6" containerName="tempest-tests-tempest-tests-runner" Nov 25 15:25:06 crc kubenswrapper[4796]: I1125 15:25:06.540603 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6" containerName="tempest-tests-tempest-tests-runner" Nov 25 15:25:06 crc kubenswrapper[4796]: E1125 15:25:06.540632 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bdb5f1e-90ba-4f2a-9515-61c103291ec8" containerName="registry-server" Nov 25 15:25:06 crc kubenswrapper[4796]: I1125 15:25:06.540644 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bdb5f1e-90ba-4f2a-9515-61c103291ec8" containerName="registry-server" Nov 25 15:25:06 crc kubenswrapper[4796]: E1125 15:25:06.540665 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a58268ef-bc08-43ce-a11a-dda18dbe317e" containerName="extract-content" Nov 25 15:25:06 crc kubenswrapper[4796]: I1125 15:25:06.540676 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a58268ef-bc08-43ce-a11a-dda18dbe317e" containerName="extract-content" Nov 25 15:25:06 crc kubenswrapper[4796]: I1125 15:25:06.540977 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a58268ef-bc08-43ce-a11a-dda18dbe317e" containerName="registry-server" Nov 25 15:25:06 crc kubenswrapper[4796]: I1125 15:25:06.540999 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bdb5f1e-90ba-4f2a-9515-61c103291ec8" containerName="registry-server" Nov 25 15:25:06 crc kubenswrapper[4796]: I1125 15:25:06.541029 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6" containerName="tempest-tests-tempest-tests-runner" Nov 25 15:25:06 crc kubenswrapper[4796]: I1125 15:25:06.542011 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 15:25:06 crc kubenswrapper[4796]: I1125 15:25:06.546081 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-s6pct" Nov 25 15:25:06 crc kubenswrapper[4796]: I1125 15:25:06.559825 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 25 15:25:06 crc kubenswrapper[4796]: I1125 15:25:06.688186 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"99ad25b2-341c-43c5-a15a-12b70e1711b3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 15:25:06 crc kubenswrapper[4796]: I1125 15:25:06.688516 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6b2p\" (UniqueName: \"kubernetes.io/projected/99ad25b2-341c-43c5-a15a-12b70e1711b3-kube-api-access-b6b2p\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"99ad25b2-341c-43c5-a15a-12b70e1711b3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 15:25:06 crc kubenswrapper[4796]: I1125 15:25:06.790537 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6b2p\" (UniqueName: \"kubernetes.io/projected/99ad25b2-341c-43c5-a15a-12b70e1711b3-kube-api-access-b6b2p\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"99ad25b2-341c-43c5-a15a-12b70e1711b3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 15:25:06 crc kubenswrapper[4796]: I1125 15:25:06.790663 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"99ad25b2-341c-43c5-a15a-12b70e1711b3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 15:25:06 crc kubenswrapper[4796]: I1125 15:25:06.791273 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"99ad25b2-341c-43c5-a15a-12b70e1711b3\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 15:25:06 crc kubenswrapper[4796]: I1125 15:25:06.821044 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"99ad25b2-341c-43c5-a15a-12b70e1711b3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 15:25:06 crc kubenswrapper[4796]: I1125 15:25:06.821814 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6b2p\" (UniqueName: \"kubernetes.io/projected/99ad25b2-341c-43c5-a15a-12b70e1711b3-kube-api-access-b6b2p\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"99ad25b2-341c-43c5-a15a-12b70e1711b3\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 15:25:06 crc kubenswrapper[4796]: I1125 15:25:06.868215 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 15:25:07 crc kubenswrapper[4796]: I1125 15:25:07.321987 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 25 15:25:07 crc kubenswrapper[4796]: I1125 15:25:07.324242 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 15:25:07 crc kubenswrapper[4796]: I1125 15:25:07.995609 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"99ad25b2-341c-43c5-a15a-12b70e1711b3","Type":"ContainerStarted","Data":"3ef37b0d0a49225e3a62cce36e1248a00688952c8ae5e69f8b9e5d6ae1b6fa5e"} Nov 25 15:25:09 crc kubenswrapper[4796]: I1125 15:25:09.006485 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"99ad25b2-341c-43c5-a15a-12b70e1711b3","Type":"ContainerStarted","Data":"d2cbc67cab10d326b27f32d1821494ea29f66ce3164f9279a2eb22aaab82e96c"} Nov 25 15:25:09 crc kubenswrapper[4796]: I1125 15:25:09.038398 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.079227655 podStartE2EDuration="3.038370485s" podCreationTimestamp="2025-11-25 15:25:06 +0000 UTC" firstStartedPulling="2025-11-25 15:25:07.323938645 +0000 UTC m=+3635.667048059" lastFinishedPulling="2025-11-25 15:25:08.283081445 +0000 UTC m=+3636.626190889" observedRunningTime="2025-11-25 15:25:09.026674971 +0000 UTC m=+3637.369784405" watchObservedRunningTime="2025-11-25 15:25:09.038370485 +0000 UTC m=+3637.381479949" Nov 25 15:25:19 crc kubenswrapper[4796]: I1125 15:25:19.514132 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:25:19 crc kubenswrapper[4796]: I1125 15:25:19.516853 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:25:19 crc kubenswrapper[4796]: I1125 15:25:19.517096 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 15:25:19 crc kubenswrapper[4796]: I1125 15:25:19.518689 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee"} pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 15:25:19 crc kubenswrapper[4796]: I1125 15:25:19.519016 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" containerID="cri-o://397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" gracePeriod=600 Nov 25 15:25:19 crc kubenswrapper[4796]: E1125 15:25:19.652899 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:25:20 crc kubenswrapper[4796]: I1125 15:25:20.137280 4796 generic.go:334] "Generic (PLEG): container finished" podID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerID="397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" exitCode=0 Nov 25 15:25:20 crc kubenswrapper[4796]: I1125 15:25:20.137357 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerDied","Data":"397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee"} Nov 25 15:25:20 crc kubenswrapper[4796]: I1125 15:25:20.138276 4796 scope.go:117] "RemoveContainer" containerID="d05bce62cdff9f1975558d9a6547b9599dacc5553f38f0741095e09998e9b591" Nov 25 15:25:20 crc kubenswrapper[4796]: I1125 15:25:20.139281 4796 scope.go:117] "RemoveContainer" containerID="397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" Nov 25 15:25:20 crc kubenswrapper[4796]: E1125 15:25:20.140007 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:25:31 crc kubenswrapper[4796]: I1125 15:25:31.410203 4796 scope.go:117] "RemoveContainer" containerID="397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" Nov 25 15:25:31 crc kubenswrapper[4796]: E1125 15:25:31.411652 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:25:31 crc kubenswrapper[4796]: I1125 15:25:31.839275 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mw55g/must-gather-nlpgl"] Nov 25 15:25:31 crc kubenswrapper[4796]: I1125 15:25:31.841244 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mw55g/must-gather-nlpgl" Nov 25 15:25:31 crc kubenswrapper[4796]: I1125 15:25:31.844175 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-mw55g"/"default-dockercfg-9sx4j" Nov 25 15:25:31 crc kubenswrapper[4796]: I1125 15:25:31.844341 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mw55g"/"openshift-service-ca.crt" Nov 25 15:25:31 crc kubenswrapper[4796]: I1125 15:25:31.844465 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mw55g"/"kube-root-ca.crt" Nov 25 15:25:31 crc kubenswrapper[4796]: I1125 15:25:31.854330 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mw55g/must-gather-nlpgl"] Nov 25 15:25:31 crc kubenswrapper[4796]: I1125 15:25:31.958661 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctnf8\" (UniqueName: \"kubernetes.io/projected/22e13480-3aaf-4df3-8c30-1cd8f2b33e55-kube-api-access-ctnf8\") pod \"must-gather-nlpgl\" (UID: \"22e13480-3aaf-4df3-8c30-1cd8f2b33e55\") " pod="openshift-must-gather-mw55g/must-gather-nlpgl" Nov 25 15:25:31 crc kubenswrapper[4796]: I1125 15:25:31.959267 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/22e13480-3aaf-4df3-8c30-1cd8f2b33e55-must-gather-output\") pod \"must-gather-nlpgl\" (UID: \"22e13480-3aaf-4df3-8c30-1cd8f2b33e55\") " pod="openshift-must-gather-mw55g/must-gather-nlpgl" Nov 25 15:25:32 crc kubenswrapper[4796]: I1125 15:25:32.062411 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/22e13480-3aaf-4df3-8c30-1cd8f2b33e55-must-gather-output\") pod \"must-gather-nlpgl\" (UID: \"22e13480-3aaf-4df3-8c30-1cd8f2b33e55\") " pod="openshift-must-gather-mw55g/must-gather-nlpgl" Nov 25 15:25:32 crc kubenswrapper[4796]: I1125 15:25:32.062822 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctnf8\" (UniqueName: \"kubernetes.io/projected/22e13480-3aaf-4df3-8c30-1cd8f2b33e55-kube-api-access-ctnf8\") pod \"must-gather-nlpgl\" (UID: \"22e13480-3aaf-4df3-8c30-1cd8f2b33e55\") " pod="openshift-must-gather-mw55g/must-gather-nlpgl" Nov 25 15:25:32 crc kubenswrapper[4796]: I1125 15:25:32.063282 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/22e13480-3aaf-4df3-8c30-1cd8f2b33e55-must-gather-output\") pod \"must-gather-nlpgl\" (UID: \"22e13480-3aaf-4df3-8c30-1cd8f2b33e55\") " pod="openshift-must-gather-mw55g/must-gather-nlpgl" Nov 25 15:25:32 crc kubenswrapper[4796]: I1125 15:25:32.087476 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctnf8\" (UniqueName: \"kubernetes.io/projected/22e13480-3aaf-4df3-8c30-1cd8f2b33e55-kube-api-access-ctnf8\") pod \"must-gather-nlpgl\" (UID: \"22e13480-3aaf-4df3-8c30-1cd8f2b33e55\") " pod="openshift-must-gather-mw55g/must-gather-nlpgl" Nov 25 15:25:32 crc kubenswrapper[4796]: I1125 15:25:32.159996 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mw55g/must-gather-nlpgl" Nov 25 15:25:32 crc kubenswrapper[4796]: I1125 15:25:32.621797 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mw55g/must-gather-nlpgl"] Nov 25 15:25:32 crc kubenswrapper[4796]: W1125 15:25:32.631767 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22e13480_3aaf_4df3_8c30_1cd8f2b33e55.slice/crio-1eaad81798e6d369546b33c9deeaee4e104663df6f507083b6f6c37ece394e61 WatchSource:0}: Error finding container 1eaad81798e6d369546b33c9deeaee4e104663df6f507083b6f6c37ece394e61: Status 404 returned error can't find the container with id 1eaad81798e6d369546b33c9deeaee4e104663df6f507083b6f6c37ece394e61 Nov 25 15:25:33 crc kubenswrapper[4796]: I1125 15:25:33.309740 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mw55g/must-gather-nlpgl" event={"ID":"22e13480-3aaf-4df3-8c30-1cd8f2b33e55","Type":"ContainerStarted","Data":"1eaad81798e6d369546b33c9deeaee4e104663df6f507083b6f6c37ece394e61"} Nov 25 15:25:42 crc kubenswrapper[4796]: I1125 15:25:42.421133 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mw55g/must-gather-nlpgl" event={"ID":"22e13480-3aaf-4df3-8c30-1cd8f2b33e55","Type":"ContainerStarted","Data":"be169575477e0714cd5760e4e5f53bb83798d81486e91313d28079630f393f9d"} Nov 25 15:25:43 crc kubenswrapper[4796]: I1125 15:25:43.429110 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mw55g/must-gather-nlpgl" event={"ID":"22e13480-3aaf-4df3-8c30-1cd8f2b33e55","Type":"ContainerStarted","Data":"6983b789376ca1dd809ff3ebc1473de66511d134934bfea717b7a190f538ae1e"} Nov 25 15:25:43 crc kubenswrapper[4796]: I1125 15:25:43.450309 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mw55g/must-gather-nlpgl" podStartSLOduration=3.187208498 
podStartE2EDuration="12.450291397s" podCreationTimestamp="2025-11-25 15:25:31 +0000 UTC" firstStartedPulling="2025-11-25 15:25:32.633621965 +0000 UTC m=+3660.976731389" lastFinishedPulling="2025-11-25 15:25:41.896704824 +0000 UTC m=+3670.239814288" observedRunningTime="2025-11-25 15:25:43.44562227 +0000 UTC m=+3671.788731704" watchObservedRunningTime="2025-11-25 15:25:43.450291397 +0000 UTC m=+3671.793400821" Nov 25 15:25:44 crc kubenswrapper[4796]: I1125 15:25:44.410063 4796 scope.go:117] "RemoveContainer" containerID="397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" Nov 25 15:25:44 crc kubenswrapper[4796]: E1125 15:25:44.410339 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:25:47 crc kubenswrapper[4796]: I1125 15:25:47.493976 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mw55g/crc-debug-vqbm6"] Nov 25 15:25:47 crc kubenswrapper[4796]: I1125 15:25:47.496749 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mw55g/crc-debug-vqbm6" Nov 25 15:25:47 crc kubenswrapper[4796]: I1125 15:25:47.597216 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjsms\" (UniqueName: \"kubernetes.io/projected/9069b5d9-4437-4d3d-92a2-a2ba7a90aab8-kube-api-access-sjsms\") pod \"crc-debug-vqbm6\" (UID: \"9069b5d9-4437-4d3d-92a2-a2ba7a90aab8\") " pod="openshift-must-gather-mw55g/crc-debug-vqbm6" Nov 25 15:25:47 crc kubenswrapper[4796]: I1125 15:25:47.597354 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9069b5d9-4437-4d3d-92a2-a2ba7a90aab8-host\") pod \"crc-debug-vqbm6\" (UID: \"9069b5d9-4437-4d3d-92a2-a2ba7a90aab8\") " pod="openshift-must-gather-mw55g/crc-debug-vqbm6" Nov 25 15:25:47 crc kubenswrapper[4796]: I1125 15:25:47.699652 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9069b5d9-4437-4d3d-92a2-a2ba7a90aab8-host\") pod \"crc-debug-vqbm6\" (UID: \"9069b5d9-4437-4d3d-92a2-a2ba7a90aab8\") " pod="openshift-must-gather-mw55g/crc-debug-vqbm6" Nov 25 15:25:47 crc kubenswrapper[4796]: I1125 15:25:47.699784 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9069b5d9-4437-4d3d-92a2-a2ba7a90aab8-host\") pod \"crc-debug-vqbm6\" (UID: \"9069b5d9-4437-4d3d-92a2-a2ba7a90aab8\") " pod="openshift-must-gather-mw55g/crc-debug-vqbm6" Nov 25 15:25:47 crc kubenswrapper[4796]: I1125 15:25:47.699826 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjsms\" (UniqueName: \"kubernetes.io/projected/9069b5d9-4437-4d3d-92a2-a2ba7a90aab8-kube-api-access-sjsms\") pod \"crc-debug-vqbm6\" (UID: \"9069b5d9-4437-4d3d-92a2-a2ba7a90aab8\") " pod="openshift-must-gather-mw55g/crc-debug-vqbm6" Nov 25 15:25:47 crc 
kubenswrapper[4796]: I1125 15:25:47.723331 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjsms\" (UniqueName: \"kubernetes.io/projected/9069b5d9-4437-4d3d-92a2-a2ba7a90aab8-kube-api-access-sjsms\") pod \"crc-debug-vqbm6\" (UID: \"9069b5d9-4437-4d3d-92a2-a2ba7a90aab8\") " pod="openshift-must-gather-mw55g/crc-debug-vqbm6" Nov 25 15:25:47 crc kubenswrapper[4796]: I1125 15:25:47.816217 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mw55g/crc-debug-vqbm6" Nov 25 15:25:48 crc kubenswrapper[4796]: I1125 15:25:48.481685 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mw55g/crc-debug-vqbm6" event={"ID":"9069b5d9-4437-4d3d-92a2-a2ba7a90aab8","Type":"ContainerStarted","Data":"866f02928c5eba7a68c1a332be16f64d641cf7aa816f005abeca646164f1a52d"} Nov 25 15:25:55 crc kubenswrapper[4796]: I1125 15:25:55.409299 4796 scope.go:117] "RemoveContainer" containerID="397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" Nov 25 15:25:55 crc kubenswrapper[4796]: E1125 15:25:55.410117 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:25:59 crc kubenswrapper[4796]: I1125 15:25:59.578728 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mw55g/crc-debug-vqbm6" event={"ID":"9069b5d9-4437-4d3d-92a2-a2ba7a90aab8","Type":"ContainerStarted","Data":"f682ac403121b2aba75bed1c4e18af69bf163d144b0d98c8e0f99754c1ee0da4"} Nov 25 15:25:59 crc kubenswrapper[4796]: I1125 15:25:59.602466 4796 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-must-gather-mw55g/crc-debug-vqbm6" podStartSLOduration=1.364866738 podStartE2EDuration="12.602446235s" podCreationTimestamp="2025-11-25 15:25:47 +0000 UTC" firstStartedPulling="2025-11-25 15:25:47.883548053 +0000 UTC m=+3676.226657467" lastFinishedPulling="2025-11-25 15:25:59.12112754 +0000 UTC m=+3687.464236964" observedRunningTime="2025-11-25 15:25:59.590507512 +0000 UTC m=+3687.933616936" watchObservedRunningTime="2025-11-25 15:25:59.602446235 +0000 UTC m=+3687.945555669" Nov 25 15:26:10 crc kubenswrapper[4796]: I1125 15:26:10.409805 4796 scope.go:117] "RemoveContainer" containerID="397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" Nov 25 15:26:10 crc kubenswrapper[4796]: E1125 15:26:10.411736 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:26:22 crc kubenswrapper[4796]: I1125 15:26:22.419349 4796 scope.go:117] "RemoveContainer" containerID="397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" Nov 25 15:26:22 crc kubenswrapper[4796]: E1125 15:26:22.420624 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:26:25 crc kubenswrapper[4796]: I1125 15:26:25.013028 4796 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-s7dmr"] Nov 25 15:26:25 crc kubenswrapper[4796]: I1125 15:26:25.015749 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s7dmr" Nov 25 15:26:25 crc kubenswrapper[4796]: I1125 15:26:25.046014 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s7dmr"] Nov 25 15:26:25 crc kubenswrapper[4796]: I1125 15:26:25.207468 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d7052ec-4340-472c-8add-94483920eeac-catalog-content\") pod \"community-operators-s7dmr\" (UID: \"7d7052ec-4340-472c-8add-94483920eeac\") " pod="openshift-marketplace/community-operators-s7dmr" Nov 25 15:26:25 crc kubenswrapper[4796]: I1125 15:26:25.207538 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr4l2\" (UniqueName: \"kubernetes.io/projected/7d7052ec-4340-472c-8add-94483920eeac-kube-api-access-cr4l2\") pod \"community-operators-s7dmr\" (UID: \"7d7052ec-4340-472c-8add-94483920eeac\") " pod="openshift-marketplace/community-operators-s7dmr" Nov 25 15:26:25 crc kubenswrapper[4796]: I1125 15:26:25.207651 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d7052ec-4340-472c-8add-94483920eeac-utilities\") pod \"community-operators-s7dmr\" (UID: \"7d7052ec-4340-472c-8add-94483920eeac\") " pod="openshift-marketplace/community-operators-s7dmr" Nov 25 15:26:25 crc kubenswrapper[4796]: I1125 15:26:25.309487 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d7052ec-4340-472c-8add-94483920eeac-utilities\") pod \"community-operators-s7dmr\" (UID: \"7d7052ec-4340-472c-8add-94483920eeac\") 
" pod="openshift-marketplace/community-operators-s7dmr" Nov 25 15:26:25 crc kubenswrapper[4796]: I1125 15:26:25.309648 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d7052ec-4340-472c-8add-94483920eeac-catalog-content\") pod \"community-operators-s7dmr\" (UID: \"7d7052ec-4340-472c-8add-94483920eeac\") " pod="openshift-marketplace/community-operators-s7dmr" Nov 25 15:26:25 crc kubenswrapper[4796]: I1125 15:26:25.309694 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr4l2\" (UniqueName: \"kubernetes.io/projected/7d7052ec-4340-472c-8add-94483920eeac-kube-api-access-cr4l2\") pod \"community-operators-s7dmr\" (UID: \"7d7052ec-4340-472c-8add-94483920eeac\") " pod="openshift-marketplace/community-operators-s7dmr" Nov 25 15:26:25 crc kubenswrapper[4796]: I1125 15:26:25.310159 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d7052ec-4340-472c-8add-94483920eeac-utilities\") pod \"community-operators-s7dmr\" (UID: \"7d7052ec-4340-472c-8add-94483920eeac\") " pod="openshift-marketplace/community-operators-s7dmr" Nov 25 15:26:25 crc kubenswrapper[4796]: I1125 15:26:25.310674 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d7052ec-4340-472c-8add-94483920eeac-catalog-content\") pod \"community-operators-s7dmr\" (UID: \"7d7052ec-4340-472c-8add-94483920eeac\") " pod="openshift-marketplace/community-operators-s7dmr" Nov 25 15:26:25 crc kubenswrapper[4796]: I1125 15:26:25.343712 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr4l2\" (UniqueName: \"kubernetes.io/projected/7d7052ec-4340-472c-8add-94483920eeac-kube-api-access-cr4l2\") pod \"community-operators-s7dmr\" (UID: \"7d7052ec-4340-472c-8add-94483920eeac\") " 
pod="openshift-marketplace/community-operators-s7dmr" Nov 25 15:26:25 crc kubenswrapper[4796]: I1125 15:26:25.347342 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s7dmr" Nov 25 15:26:25 crc kubenswrapper[4796]: I1125 15:26:25.915366 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s7dmr"] Nov 25 15:26:26 crc kubenswrapper[4796]: I1125 15:26:26.858779 4796 generic.go:334] "Generic (PLEG): container finished" podID="7d7052ec-4340-472c-8add-94483920eeac" containerID="64a43b493eb014b0eeccf60fd66a0ff899a562bc896fee01101c24298f3adf05" exitCode=0 Nov 25 15:26:26 crc kubenswrapper[4796]: I1125 15:26:26.858900 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7dmr" event={"ID":"7d7052ec-4340-472c-8add-94483920eeac","Type":"ContainerDied","Data":"64a43b493eb014b0eeccf60fd66a0ff899a562bc896fee01101c24298f3adf05"} Nov 25 15:26:26 crc kubenswrapper[4796]: I1125 15:26:26.859138 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7dmr" event={"ID":"7d7052ec-4340-472c-8add-94483920eeac","Type":"ContainerStarted","Data":"39141615c929c024575f9df282e2c7d2c53ca03a8f65f46b0796e6965f26af12"} Nov 25 15:26:33 crc kubenswrapper[4796]: I1125 15:26:33.945264 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7dmr" event={"ID":"7d7052ec-4340-472c-8add-94483920eeac","Type":"ContainerStarted","Data":"704997ab6a4a18c0ffc91c8c671ea3969b209c73113e2476e9f92accac66a1fb"} Nov 25 15:26:34 crc kubenswrapper[4796]: I1125 15:26:34.959145 4796 generic.go:334] "Generic (PLEG): container finished" podID="7d7052ec-4340-472c-8add-94483920eeac" containerID="704997ab6a4a18c0ffc91c8c671ea3969b209c73113e2476e9f92accac66a1fb" exitCode=0 Nov 25 15:26:34 crc kubenswrapper[4796]: I1125 15:26:34.959436 4796 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-s7dmr" event={"ID":"7d7052ec-4340-472c-8add-94483920eeac","Type":"ContainerDied","Data":"704997ab6a4a18c0ffc91c8c671ea3969b209c73113e2476e9f92accac66a1fb"} Nov 25 15:26:35 crc kubenswrapper[4796]: I1125 15:26:35.409955 4796 scope.go:117] "RemoveContainer" containerID="397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" Nov 25 15:26:35 crc kubenswrapper[4796]: E1125 15:26:35.410715 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:26:36 crc kubenswrapper[4796]: I1125 15:26:36.982247 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7dmr" event={"ID":"7d7052ec-4340-472c-8add-94483920eeac","Type":"ContainerStarted","Data":"a01dacbf5d95655b6f8862a102a967837dda511dde26ec61c58b6be41318238b"} Nov 25 15:26:37 crc kubenswrapper[4796]: I1125 15:26:37.005145 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s7dmr" podStartSLOduration=3.980617688 podStartE2EDuration="13.005119808s" podCreationTimestamp="2025-11-25 15:26:24 +0000 UTC" firstStartedPulling="2025-11-25 15:26:26.862592591 +0000 UTC m=+3715.205702015" lastFinishedPulling="2025-11-25 15:26:35.887094701 +0000 UTC m=+3724.230204135" observedRunningTime="2025-11-25 15:26:36.999602366 +0000 UTC m=+3725.342711820" watchObservedRunningTime="2025-11-25 15:26:37.005119808 +0000 UTC m=+3725.348229232" Nov 25 15:26:42 crc kubenswrapper[4796]: I1125 15:26:42.041943 4796 generic.go:334] "Generic (PLEG): container finished" 
podID="9069b5d9-4437-4d3d-92a2-a2ba7a90aab8" containerID="f682ac403121b2aba75bed1c4e18af69bf163d144b0d98c8e0f99754c1ee0da4" exitCode=0 Nov 25 15:26:42 crc kubenswrapper[4796]: I1125 15:26:42.042009 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mw55g/crc-debug-vqbm6" event={"ID":"9069b5d9-4437-4d3d-92a2-a2ba7a90aab8","Type":"ContainerDied","Data":"f682ac403121b2aba75bed1c4e18af69bf163d144b0d98c8e0f99754c1ee0da4"} Nov 25 15:26:43 crc kubenswrapper[4796]: I1125 15:26:43.179603 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mw55g/crc-debug-vqbm6" Nov 25 15:26:43 crc kubenswrapper[4796]: I1125 15:26:43.232127 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mw55g/crc-debug-vqbm6"] Nov 25 15:26:43 crc kubenswrapper[4796]: I1125 15:26:43.242321 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mw55g/crc-debug-vqbm6"] Nov 25 15:26:43 crc kubenswrapper[4796]: I1125 15:26:43.294606 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9069b5d9-4437-4d3d-92a2-a2ba7a90aab8-host\") pod \"9069b5d9-4437-4d3d-92a2-a2ba7a90aab8\" (UID: \"9069b5d9-4437-4d3d-92a2-a2ba7a90aab8\") " Nov 25 15:26:43 crc kubenswrapper[4796]: I1125 15:26:43.294713 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9069b5d9-4437-4d3d-92a2-a2ba7a90aab8-host" (OuterVolumeSpecName: "host") pod "9069b5d9-4437-4d3d-92a2-a2ba7a90aab8" (UID: "9069b5d9-4437-4d3d-92a2-a2ba7a90aab8"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:26:43 crc kubenswrapper[4796]: I1125 15:26:43.294904 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjsms\" (UniqueName: \"kubernetes.io/projected/9069b5d9-4437-4d3d-92a2-a2ba7a90aab8-kube-api-access-sjsms\") pod \"9069b5d9-4437-4d3d-92a2-a2ba7a90aab8\" (UID: \"9069b5d9-4437-4d3d-92a2-a2ba7a90aab8\") " Nov 25 15:26:43 crc kubenswrapper[4796]: I1125 15:26:43.296242 4796 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9069b5d9-4437-4d3d-92a2-a2ba7a90aab8-host\") on node \"crc\" DevicePath \"\"" Nov 25 15:26:43 crc kubenswrapper[4796]: I1125 15:26:43.302190 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9069b5d9-4437-4d3d-92a2-a2ba7a90aab8-kube-api-access-sjsms" (OuterVolumeSpecName: "kube-api-access-sjsms") pod "9069b5d9-4437-4d3d-92a2-a2ba7a90aab8" (UID: "9069b5d9-4437-4d3d-92a2-a2ba7a90aab8"). InnerVolumeSpecName "kube-api-access-sjsms". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:26:43 crc kubenswrapper[4796]: I1125 15:26:43.398307 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjsms\" (UniqueName: \"kubernetes.io/projected/9069b5d9-4437-4d3d-92a2-a2ba7a90aab8-kube-api-access-sjsms\") on node \"crc\" DevicePath \"\"" Nov 25 15:26:44 crc kubenswrapper[4796]: I1125 15:26:44.069940 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="866f02928c5eba7a68c1a332be16f64d641cf7aa816f005abeca646164f1a52d" Nov 25 15:26:44 crc kubenswrapper[4796]: I1125 15:26:44.070039 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mw55g/crc-debug-vqbm6" Nov 25 15:26:44 crc kubenswrapper[4796]: I1125 15:26:44.427488 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9069b5d9-4437-4d3d-92a2-a2ba7a90aab8" path="/var/lib/kubelet/pods/9069b5d9-4437-4d3d-92a2-a2ba7a90aab8/volumes" Nov 25 15:26:44 crc kubenswrapper[4796]: I1125 15:26:44.500918 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mw55g/crc-debug-bttbx"] Nov 25 15:26:44 crc kubenswrapper[4796]: E1125 15:26:44.501440 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9069b5d9-4437-4d3d-92a2-a2ba7a90aab8" containerName="container-00" Nov 25 15:26:44 crc kubenswrapper[4796]: I1125 15:26:44.501464 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9069b5d9-4437-4d3d-92a2-a2ba7a90aab8" containerName="container-00" Nov 25 15:26:44 crc kubenswrapper[4796]: I1125 15:26:44.501723 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9069b5d9-4437-4d3d-92a2-a2ba7a90aab8" containerName="container-00" Nov 25 15:26:44 crc kubenswrapper[4796]: I1125 15:26:44.502503 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mw55g/crc-debug-bttbx" Nov 25 15:26:44 crc kubenswrapper[4796]: I1125 15:26:44.625945 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr5ht\" (UniqueName: \"kubernetes.io/projected/4e3d2d2e-dd57-4619-9a0c-637963f73b62-kube-api-access-xr5ht\") pod \"crc-debug-bttbx\" (UID: \"4e3d2d2e-dd57-4619-9a0c-637963f73b62\") " pod="openshift-must-gather-mw55g/crc-debug-bttbx" Nov 25 15:26:44 crc kubenswrapper[4796]: I1125 15:26:44.626102 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4e3d2d2e-dd57-4619-9a0c-637963f73b62-host\") pod \"crc-debug-bttbx\" (UID: \"4e3d2d2e-dd57-4619-9a0c-637963f73b62\") " pod="openshift-must-gather-mw55g/crc-debug-bttbx" Nov 25 15:26:44 crc kubenswrapper[4796]: I1125 15:26:44.728674 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4e3d2d2e-dd57-4619-9a0c-637963f73b62-host\") pod \"crc-debug-bttbx\" (UID: \"4e3d2d2e-dd57-4619-9a0c-637963f73b62\") " pod="openshift-must-gather-mw55g/crc-debug-bttbx" Nov 25 15:26:44 crc kubenswrapper[4796]: I1125 15:26:44.728919 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr5ht\" (UniqueName: \"kubernetes.io/projected/4e3d2d2e-dd57-4619-9a0c-637963f73b62-kube-api-access-xr5ht\") pod \"crc-debug-bttbx\" (UID: \"4e3d2d2e-dd57-4619-9a0c-637963f73b62\") " pod="openshift-must-gather-mw55g/crc-debug-bttbx" Nov 25 15:26:44 crc kubenswrapper[4796]: I1125 15:26:44.728925 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4e3d2d2e-dd57-4619-9a0c-637963f73b62-host\") pod \"crc-debug-bttbx\" (UID: \"4e3d2d2e-dd57-4619-9a0c-637963f73b62\") " pod="openshift-must-gather-mw55g/crc-debug-bttbx" Nov 25 15:26:44 crc 
kubenswrapper[4796]: I1125 15:26:44.749807 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr5ht\" (UniqueName: \"kubernetes.io/projected/4e3d2d2e-dd57-4619-9a0c-637963f73b62-kube-api-access-xr5ht\") pod \"crc-debug-bttbx\" (UID: \"4e3d2d2e-dd57-4619-9a0c-637963f73b62\") " pod="openshift-must-gather-mw55g/crc-debug-bttbx" Nov 25 15:26:44 crc kubenswrapper[4796]: I1125 15:26:44.821531 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mw55g/crc-debug-bttbx" Nov 25 15:26:45 crc kubenswrapper[4796]: I1125 15:26:45.080063 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mw55g/crc-debug-bttbx" event={"ID":"4e3d2d2e-dd57-4619-9a0c-637963f73b62","Type":"ContainerStarted","Data":"eed947eb4abfcb9c9c3717c729794694cbef069d68e6064c0441ae5d708646a3"} Nov 25 15:26:45 crc kubenswrapper[4796]: I1125 15:26:45.348447 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s7dmr" Nov 25 15:26:45 crc kubenswrapper[4796]: I1125 15:26:45.348561 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s7dmr" Nov 25 15:26:45 crc kubenswrapper[4796]: I1125 15:26:45.405783 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s7dmr" Nov 25 15:26:46 crc kubenswrapper[4796]: I1125 15:26:46.092661 4796 generic.go:334] "Generic (PLEG): container finished" podID="4e3d2d2e-dd57-4619-9a0c-637963f73b62" containerID="23328013a085e6f26fa2786aee1da4387a9d1d170ef46f62095cc26ea9cc9cca" exitCode=0 Nov 25 15:26:46 crc kubenswrapper[4796]: I1125 15:26:46.092736 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mw55g/crc-debug-bttbx" 
event={"ID":"4e3d2d2e-dd57-4619-9a0c-637963f73b62","Type":"ContainerDied","Data":"23328013a085e6f26fa2786aee1da4387a9d1d170ef46f62095cc26ea9cc9cca"} Nov 25 15:26:46 crc kubenswrapper[4796]: I1125 15:26:46.172316 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s7dmr" Nov 25 15:26:46 crc kubenswrapper[4796]: I1125 15:26:46.285654 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s7dmr"] Nov 25 15:26:46 crc kubenswrapper[4796]: I1125 15:26:46.512020 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r65qw"] Nov 25 15:26:46 crc kubenswrapper[4796]: I1125 15:26:46.622445 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mw55g/crc-debug-bttbx"] Nov 25 15:26:46 crc kubenswrapper[4796]: I1125 15:26:46.631782 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mw55g/crc-debug-bttbx"] Nov 25 15:26:47 crc kubenswrapper[4796]: I1125 15:26:47.104485 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r65qw" podUID="19662a6d-c366-4a79-9301-2a474d54792f" containerName="registry-server" containerID="cri-o://83180035d637507a0c8eb8a5ad1060f9e41db214039d2b985d16d116e7ddab6f" gracePeriod=2 Nov 25 15:26:47 crc kubenswrapper[4796]: I1125 15:26:47.341098 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mw55g/crc-debug-bttbx" Nov 25 15:26:47 crc kubenswrapper[4796]: I1125 15:26:47.493751 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xr5ht\" (UniqueName: \"kubernetes.io/projected/4e3d2d2e-dd57-4619-9a0c-637963f73b62-kube-api-access-xr5ht\") pod \"4e3d2d2e-dd57-4619-9a0c-637963f73b62\" (UID: \"4e3d2d2e-dd57-4619-9a0c-637963f73b62\") " Nov 25 15:26:47 crc kubenswrapper[4796]: I1125 15:26:47.494523 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4e3d2d2e-dd57-4619-9a0c-637963f73b62-host\") pod \"4e3d2d2e-dd57-4619-9a0c-637963f73b62\" (UID: \"4e3d2d2e-dd57-4619-9a0c-637963f73b62\") " Nov 25 15:26:47 crc kubenswrapper[4796]: I1125 15:26:47.494709 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e3d2d2e-dd57-4619-9a0c-637963f73b62-host" (OuterVolumeSpecName: "host") pod "4e3d2d2e-dd57-4619-9a0c-637963f73b62" (UID: "4e3d2d2e-dd57-4619-9a0c-637963f73b62"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:26:47 crc kubenswrapper[4796]: I1125 15:26:47.495724 4796 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4e3d2d2e-dd57-4619-9a0c-637963f73b62-host\") on node \"crc\" DevicePath \"\"" Nov 25 15:26:47 crc kubenswrapper[4796]: I1125 15:26:47.522769 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e3d2d2e-dd57-4619-9a0c-637963f73b62-kube-api-access-xr5ht" (OuterVolumeSpecName: "kube-api-access-xr5ht") pod "4e3d2d2e-dd57-4619-9a0c-637963f73b62" (UID: "4e3d2d2e-dd57-4619-9a0c-637963f73b62"). InnerVolumeSpecName "kube-api-access-xr5ht". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:26:47 crc kubenswrapper[4796]: I1125 15:26:47.597651 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xr5ht\" (UniqueName: \"kubernetes.io/projected/4e3d2d2e-dd57-4619-9a0c-637963f73b62-kube-api-access-xr5ht\") on node \"crc\" DevicePath \"\"" Nov 25 15:26:47 crc kubenswrapper[4796]: I1125 15:26:47.624825 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r65qw" Nov 25 15:26:47 crc kubenswrapper[4796]: I1125 15:26:47.698727 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr54k\" (UniqueName: \"kubernetes.io/projected/19662a6d-c366-4a79-9301-2a474d54792f-kube-api-access-hr54k\") pod \"19662a6d-c366-4a79-9301-2a474d54792f\" (UID: \"19662a6d-c366-4a79-9301-2a474d54792f\") " Nov 25 15:26:47 crc kubenswrapper[4796]: I1125 15:26:47.698976 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19662a6d-c366-4a79-9301-2a474d54792f-catalog-content\") pod \"19662a6d-c366-4a79-9301-2a474d54792f\" (UID: \"19662a6d-c366-4a79-9301-2a474d54792f\") " Nov 25 15:26:47 crc kubenswrapper[4796]: I1125 15:26:47.699034 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19662a6d-c366-4a79-9301-2a474d54792f-utilities\") pod \"19662a6d-c366-4a79-9301-2a474d54792f\" (UID: \"19662a6d-c366-4a79-9301-2a474d54792f\") " Nov 25 15:26:47 crc kubenswrapper[4796]: I1125 15:26:47.705655 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19662a6d-c366-4a79-9301-2a474d54792f-utilities" (OuterVolumeSpecName: "utilities") pod "19662a6d-c366-4a79-9301-2a474d54792f" (UID: "19662a6d-c366-4a79-9301-2a474d54792f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:26:47 crc kubenswrapper[4796]: I1125 15:26:47.710855 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19662a6d-c366-4a79-9301-2a474d54792f-kube-api-access-hr54k" (OuterVolumeSpecName: "kube-api-access-hr54k") pod "19662a6d-c366-4a79-9301-2a474d54792f" (UID: "19662a6d-c366-4a79-9301-2a474d54792f"). InnerVolumeSpecName "kube-api-access-hr54k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:26:47 crc kubenswrapper[4796]: I1125 15:26:47.775190 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19662a6d-c366-4a79-9301-2a474d54792f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19662a6d-c366-4a79-9301-2a474d54792f" (UID: "19662a6d-c366-4a79-9301-2a474d54792f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:26:47 crc kubenswrapper[4796]: I1125 15:26:47.803539 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr54k\" (UniqueName: \"kubernetes.io/projected/19662a6d-c366-4a79-9301-2a474d54792f-kube-api-access-hr54k\") on node \"crc\" DevicePath \"\"" Nov 25 15:26:47 crc kubenswrapper[4796]: I1125 15:26:47.803608 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19662a6d-c366-4a79-9301-2a474d54792f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:26:47 crc kubenswrapper[4796]: I1125 15:26:47.803624 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19662a6d-c366-4a79-9301-2a474d54792f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:26:47 crc kubenswrapper[4796]: I1125 15:26:47.972120 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mw55g/crc-debug-lbp8d"] Nov 25 15:26:47 crc kubenswrapper[4796]: E1125 
15:26:47.972533 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19662a6d-c366-4a79-9301-2a474d54792f" containerName="registry-server" Nov 25 15:26:47 crc kubenswrapper[4796]: I1125 15:26:47.972550 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="19662a6d-c366-4a79-9301-2a474d54792f" containerName="registry-server" Nov 25 15:26:47 crc kubenswrapper[4796]: E1125 15:26:47.972587 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3d2d2e-dd57-4619-9a0c-637963f73b62" containerName="container-00" Nov 25 15:26:47 crc kubenswrapper[4796]: I1125 15:26:47.972595 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3d2d2e-dd57-4619-9a0c-637963f73b62" containerName="container-00" Nov 25 15:26:47 crc kubenswrapper[4796]: E1125 15:26:47.972614 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19662a6d-c366-4a79-9301-2a474d54792f" containerName="extract-utilities" Nov 25 15:26:47 crc kubenswrapper[4796]: I1125 15:26:47.972622 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="19662a6d-c366-4a79-9301-2a474d54792f" containerName="extract-utilities" Nov 25 15:26:47 crc kubenswrapper[4796]: E1125 15:26:47.972641 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19662a6d-c366-4a79-9301-2a474d54792f" containerName="extract-content" Nov 25 15:26:47 crc kubenswrapper[4796]: I1125 15:26:47.972647 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="19662a6d-c366-4a79-9301-2a474d54792f" containerName="extract-content" Nov 25 15:26:47 crc kubenswrapper[4796]: I1125 15:26:47.972810 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="19662a6d-c366-4a79-9301-2a474d54792f" containerName="registry-server" Nov 25 15:26:47 crc kubenswrapper[4796]: I1125 15:26:47.972835 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e3d2d2e-dd57-4619-9a0c-637963f73b62" containerName="container-00" Nov 25 15:26:47 crc kubenswrapper[4796]: I1125 
15:26:47.973501 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mw55g/crc-debug-lbp8d" Nov 25 15:26:48 crc kubenswrapper[4796]: I1125 15:26:48.109672 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmqpd\" (UniqueName: \"kubernetes.io/projected/ca2fcb24-010d-49f0-85a0-bd78ad0b021c-kube-api-access-hmqpd\") pod \"crc-debug-lbp8d\" (UID: \"ca2fcb24-010d-49f0-85a0-bd78ad0b021c\") " pod="openshift-must-gather-mw55g/crc-debug-lbp8d" Nov 25 15:26:48 crc kubenswrapper[4796]: I1125 15:26:48.109895 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca2fcb24-010d-49f0-85a0-bd78ad0b021c-host\") pod \"crc-debug-lbp8d\" (UID: \"ca2fcb24-010d-49f0-85a0-bd78ad0b021c\") " pod="openshift-must-gather-mw55g/crc-debug-lbp8d" Nov 25 15:26:48 crc kubenswrapper[4796]: I1125 15:26:48.122922 4796 generic.go:334] "Generic (PLEG): container finished" podID="19662a6d-c366-4a79-9301-2a474d54792f" containerID="83180035d637507a0c8eb8a5ad1060f9e41db214039d2b985d16d116e7ddab6f" exitCode=0 Nov 25 15:26:48 crc kubenswrapper[4796]: I1125 15:26:48.122986 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r65qw" event={"ID":"19662a6d-c366-4a79-9301-2a474d54792f","Type":"ContainerDied","Data":"83180035d637507a0c8eb8a5ad1060f9e41db214039d2b985d16d116e7ddab6f"} Nov 25 15:26:48 crc kubenswrapper[4796]: I1125 15:26:48.123014 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r65qw" event={"ID":"19662a6d-c366-4a79-9301-2a474d54792f","Type":"ContainerDied","Data":"debe79669088fcd6c4da0ad859a7eb835209ea6e682625760330e3d2208b746a"} Nov 25 15:26:48 crc kubenswrapper[4796]: I1125 15:26:48.123033 4796 scope.go:117] "RemoveContainer" 
containerID="83180035d637507a0c8eb8a5ad1060f9e41db214039d2b985d16d116e7ddab6f" Nov 25 15:26:48 crc kubenswrapper[4796]: I1125 15:26:48.123159 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r65qw" Nov 25 15:26:48 crc kubenswrapper[4796]: I1125 15:26:48.130089 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mw55g/crc-debug-bttbx" Nov 25 15:26:48 crc kubenswrapper[4796]: I1125 15:26:48.131780 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eed947eb4abfcb9c9c3717c729794694cbef069d68e6064c0441ae5d708646a3" Nov 25 15:26:48 crc kubenswrapper[4796]: I1125 15:26:48.213091 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmqpd\" (UniqueName: \"kubernetes.io/projected/ca2fcb24-010d-49f0-85a0-bd78ad0b021c-kube-api-access-hmqpd\") pod \"crc-debug-lbp8d\" (UID: \"ca2fcb24-010d-49f0-85a0-bd78ad0b021c\") " pod="openshift-must-gather-mw55g/crc-debug-lbp8d" Nov 25 15:26:48 crc kubenswrapper[4796]: I1125 15:26:48.213232 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca2fcb24-010d-49f0-85a0-bd78ad0b021c-host\") pod \"crc-debug-lbp8d\" (UID: \"ca2fcb24-010d-49f0-85a0-bd78ad0b021c\") " pod="openshift-must-gather-mw55g/crc-debug-lbp8d" Nov 25 15:26:48 crc kubenswrapper[4796]: I1125 15:26:48.213420 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca2fcb24-010d-49f0-85a0-bd78ad0b021c-host\") pod \"crc-debug-lbp8d\" (UID: \"ca2fcb24-010d-49f0-85a0-bd78ad0b021c\") " pod="openshift-must-gather-mw55g/crc-debug-lbp8d" Nov 25 15:26:48 crc kubenswrapper[4796]: I1125 15:26:48.216085 4796 scope.go:117] "RemoveContainer" containerID="5e4dc361047257588f327e5d780c12c4caa8014b957722e0258e0f53b49d026f" Nov 25 15:26:48 crc 
kubenswrapper[4796]: I1125 15:26:48.251761 4796 scope.go:117] "RemoveContainer" containerID="9e5783f25dc3fafa6158a1d5eb5ec67d7ed3ee0ab7a56cccd4d7e5b5379d3197" Nov 25 15:26:48 crc kubenswrapper[4796]: I1125 15:26:48.254455 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmqpd\" (UniqueName: \"kubernetes.io/projected/ca2fcb24-010d-49f0-85a0-bd78ad0b021c-kube-api-access-hmqpd\") pod \"crc-debug-lbp8d\" (UID: \"ca2fcb24-010d-49f0-85a0-bd78ad0b021c\") " pod="openshift-must-gather-mw55g/crc-debug-lbp8d" Nov 25 15:26:48 crc kubenswrapper[4796]: I1125 15:26:48.257104 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r65qw"] Nov 25 15:26:48 crc kubenswrapper[4796]: I1125 15:26:48.265700 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r65qw"] Nov 25 15:26:48 crc kubenswrapper[4796]: I1125 15:26:48.293818 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mw55g/crc-debug-lbp8d" Nov 25 15:26:48 crc kubenswrapper[4796]: I1125 15:26:48.366337 4796 scope.go:117] "RemoveContainer" containerID="83180035d637507a0c8eb8a5ad1060f9e41db214039d2b985d16d116e7ddab6f" Nov 25 15:26:48 crc kubenswrapper[4796]: E1125 15:26:48.366813 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83180035d637507a0c8eb8a5ad1060f9e41db214039d2b985d16d116e7ddab6f\": container with ID starting with 83180035d637507a0c8eb8a5ad1060f9e41db214039d2b985d16d116e7ddab6f not found: ID does not exist" containerID="83180035d637507a0c8eb8a5ad1060f9e41db214039d2b985d16d116e7ddab6f" Nov 25 15:26:48 crc kubenswrapper[4796]: I1125 15:26:48.366851 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83180035d637507a0c8eb8a5ad1060f9e41db214039d2b985d16d116e7ddab6f"} err="failed to get container status \"83180035d637507a0c8eb8a5ad1060f9e41db214039d2b985d16d116e7ddab6f\": rpc error: code = NotFound desc = could not find container \"83180035d637507a0c8eb8a5ad1060f9e41db214039d2b985d16d116e7ddab6f\": container with ID starting with 83180035d637507a0c8eb8a5ad1060f9e41db214039d2b985d16d116e7ddab6f not found: ID does not exist" Nov 25 15:26:48 crc kubenswrapper[4796]: I1125 15:26:48.366883 4796 scope.go:117] "RemoveContainer" containerID="5e4dc361047257588f327e5d780c12c4caa8014b957722e0258e0f53b49d026f" Nov 25 15:26:48 crc kubenswrapper[4796]: E1125 15:26:48.367224 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e4dc361047257588f327e5d780c12c4caa8014b957722e0258e0f53b49d026f\": container with ID starting with 5e4dc361047257588f327e5d780c12c4caa8014b957722e0258e0f53b49d026f not found: ID does not exist" containerID="5e4dc361047257588f327e5d780c12c4caa8014b957722e0258e0f53b49d026f" Nov 25 15:26:48 crc kubenswrapper[4796]: 
I1125 15:26:48.367262 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e4dc361047257588f327e5d780c12c4caa8014b957722e0258e0f53b49d026f"} err="failed to get container status \"5e4dc361047257588f327e5d780c12c4caa8014b957722e0258e0f53b49d026f\": rpc error: code = NotFound desc = could not find container \"5e4dc361047257588f327e5d780c12c4caa8014b957722e0258e0f53b49d026f\": container with ID starting with 5e4dc361047257588f327e5d780c12c4caa8014b957722e0258e0f53b49d026f not found: ID does not exist" Nov 25 15:26:48 crc kubenswrapper[4796]: I1125 15:26:48.367280 4796 scope.go:117] "RemoveContainer" containerID="9e5783f25dc3fafa6158a1d5eb5ec67d7ed3ee0ab7a56cccd4d7e5b5379d3197" Nov 25 15:26:48 crc kubenswrapper[4796]: E1125 15:26:48.367855 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e5783f25dc3fafa6158a1d5eb5ec67d7ed3ee0ab7a56cccd4d7e5b5379d3197\": container with ID starting with 9e5783f25dc3fafa6158a1d5eb5ec67d7ed3ee0ab7a56cccd4d7e5b5379d3197 not found: ID does not exist" containerID="9e5783f25dc3fafa6158a1d5eb5ec67d7ed3ee0ab7a56cccd4d7e5b5379d3197" Nov 25 15:26:48 crc kubenswrapper[4796]: I1125 15:26:48.367914 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e5783f25dc3fafa6158a1d5eb5ec67d7ed3ee0ab7a56cccd4d7e5b5379d3197"} err="failed to get container status \"9e5783f25dc3fafa6158a1d5eb5ec67d7ed3ee0ab7a56cccd4d7e5b5379d3197\": rpc error: code = NotFound desc = could not find container \"9e5783f25dc3fafa6158a1d5eb5ec67d7ed3ee0ab7a56cccd4d7e5b5379d3197\": container with ID starting with 9e5783f25dc3fafa6158a1d5eb5ec67d7ed3ee0ab7a56cccd4d7e5b5379d3197 not found: ID does not exist" Nov 25 15:26:48 crc kubenswrapper[4796]: I1125 15:26:48.443781 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19662a6d-c366-4a79-9301-2a474d54792f" 
path="/var/lib/kubelet/pods/19662a6d-c366-4a79-9301-2a474d54792f/volumes" Nov 25 15:26:48 crc kubenswrapper[4796]: I1125 15:26:48.445140 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e3d2d2e-dd57-4619-9a0c-637963f73b62" path="/var/lib/kubelet/pods/4e3d2d2e-dd57-4619-9a0c-637963f73b62/volumes" Nov 25 15:26:49 crc kubenswrapper[4796]: I1125 15:26:49.139489 4796 generic.go:334] "Generic (PLEG): container finished" podID="ca2fcb24-010d-49f0-85a0-bd78ad0b021c" containerID="9312ea520ba5f39b096fc187e50273c2fa8c553077f0214505dee69cbc2f3afb" exitCode=0 Nov 25 15:26:49 crc kubenswrapper[4796]: I1125 15:26:49.139597 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mw55g/crc-debug-lbp8d" event={"ID":"ca2fcb24-010d-49f0-85a0-bd78ad0b021c","Type":"ContainerDied","Data":"9312ea520ba5f39b096fc187e50273c2fa8c553077f0214505dee69cbc2f3afb"} Nov 25 15:26:49 crc kubenswrapper[4796]: I1125 15:26:49.139672 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mw55g/crc-debug-lbp8d" event={"ID":"ca2fcb24-010d-49f0-85a0-bd78ad0b021c","Type":"ContainerStarted","Data":"b203ef720920be1aad6c5805a36fa5d567cb90e8dc128969f334c943a7b4a746"} Nov 25 15:26:49 crc kubenswrapper[4796]: I1125 15:26:49.190568 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mw55g/crc-debug-lbp8d"] Nov 25 15:26:49 crc kubenswrapper[4796]: I1125 15:26:49.217023 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mw55g/crc-debug-lbp8d"] Nov 25 15:26:49 crc kubenswrapper[4796]: I1125 15:26:49.410844 4796 scope.go:117] "RemoveContainer" containerID="397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" Nov 25 15:26:49 crc kubenswrapper[4796]: E1125 15:26:49.411052 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:26:50 crc kubenswrapper[4796]: I1125 15:26:50.256418 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mw55g/crc-debug-lbp8d" Nov 25 15:26:50 crc kubenswrapper[4796]: I1125 15:26:50.363942 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca2fcb24-010d-49f0-85a0-bd78ad0b021c-host\") pod \"ca2fcb24-010d-49f0-85a0-bd78ad0b021c\" (UID: \"ca2fcb24-010d-49f0-85a0-bd78ad0b021c\") " Nov 25 15:26:50 crc kubenswrapper[4796]: I1125 15:26:50.364028 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca2fcb24-010d-49f0-85a0-bd78ad0b021c-host" (OuterVolumeSpecName: "host") pod "ca2fcb24-010d-49f0-85a0-bd78ad0b021c" (UID: "ca2fcb24-010d-49f0-85a0-bd78ad0b021c"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:26:50 crc kubenswrapper[4796]: I1125 15:26:50.364211 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmqpd\" (UniqueName: \"kubernetes.io/projected/ca2fcb24-010d-49f0-85a0-bd78ad0b021c-kube-api-access-hmqpd\") pod \"ca2fcb24-010d-49f0-85a0-bd78ad0b021c\" (UID: \"ca2fcb24-010d-49f0-85a0-bd78ad0b021c\") " Nov 25 15:26:50 crc kubenswrapper[4796]: I1125 15:26:50.364802 4796 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca2fcb24-010d-49f0-85a0-bd78ad0b021c-host\") on node \"crc\" DevicePath \"\"" Nov 25 15:26:50 crc kubenswrapper[4796]: I1125 15:26:50.369486 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca2fcb24-010d-49f0-85a0-bd78ad0b021c-kube-api-access-hmqpd" (OuterVolumeSpecName: "kube-api-access-hmqpd") pod "ca2fcb24-010d-49f0-85a0-bd78ad0b021c" (UID: "ca2fcb24-010d-49f0-85a0-bd78ad0b021c"). InnerVolumeSpecName "kube-api-access-hmqpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:26:50 crc kubenswrapper[4796]: I1125 15:26:50.423182 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca2fcb24-010d-49f0-85a0-bd78ad0b021c" path="/var/lib/kubelet/pods/ca2fcb24-010d-49f0-85a0-bd78ad0b021c/volumes" Nov 25 15:26:50 crc kubenswrapper[4796]: I1125 15:26:50.467026 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmqpd\" (UniqueName: \"kubernetes.io/projected/ca2fcb24-010d-49f0-85a0-bd78ad0b021c-kube-api-access-hmqpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:26:51 crc kubenswrapper[4796]: I1125 15:26:51.161940 4796 scope.go:117] "RemoveContainer" containerID="9312ea520ba5f39b096fc187e50273c2fa8c553077f0214505dee69cbc2f3afb" Nov 25 15:26:51 crc kubenswrapper[4796]: I1125 15:26:51.162006 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mw55g/crc-debug-lbp8d" Nov 25 15:27:04 crc kubenswrapper[4796]: I1125 15:27:04.410040 4796 scope.go:117] "RemoveContainer" containerID="397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" Nov 25 15:27:04 crc kubenswrapper[4796]: E1125 15:27:04.410740 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:27:07 crc kubenswrapper[4796]: I1125 15:27:07.004297 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-648cbfbf74-5bhgn_f31c41f3-602c-427d-8728-9368c92a8d35/barbican-api/0.log" Nov 25 15:27:07 crc kubenswrapper[4796]: I1125 15:27:07.107165 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-648cbfbf74-5bhgn_f31c41f3-602c-427d-8728-9368c92a8d35/barbican-api-log/0.log" Nov 25 15:27:07 crc kubenswrapper[4796]: I1125 15:27:07.180067 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-696c6c8f78-kwfxh_71e86788-aa18-413b-aaa7-f216ef8d4f2b/barbican-keystone-listener/0.log" Nov 25 15:27:07 crc kubenswrapper[4796]: I1125 15:27:07.295843 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-696c6c8f78-kwfxh_71e86788-aa18-413b-aaa7-f216ef8d4f2b/barbican-keystone-listener-log/0.log" Nov 25 15:27:07 crc kubenswrapper[4796]: I1125 15:27:07.370818 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-847768d9dc-hdkcj_c2ea5acd-889d-439f-9295-39424d08c923/barbican-worker/0.log" Nov 25 15:27:07 crc kubenswrapper[4796]: I1125 15:27:07.437163 
4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-847768d9dc-hdkcj_c2ea5acd-889d-439f-9295-39424d08c923/barbican-worker-log/0.log" Nov 25 15:27:07 crc kubenswrapper[4796]: I1125 15:27:07.598932 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94_e06f3673-5956-425d-aefa-270976a3804d/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:27:07 crc kubenswrapper[4796]: I1125 15:27:07.760956 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_37724a0c-3784-401a-8214-3dcb37d2ce4f/ceilometer-notification-agent/0.log" Nov 25 15:27:07 crc kubenswrapper[4796]: I1125 15:27:07.764487 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_37724a0c-3784-401a-8214-3dcb37d2ce4f/ceilometer-central-agent/0.log" Nov 25 15:27:07 crc kubenswrapper[4796]: I1125 15:27:07.818649 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_37724a0c-3784-401a-8214-3dcb37d2ce4f/proxy-httpd/0.log" Nov 25 15:27:07 crc kubenswrapper[4796]: I1125 15:27:07.863342 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_37724a0c-3784-401a-8214-3dcb37d2ce4f/sg-core/0.log" Nov 25 15:27:08 crc kubenswrapper[4796]: I1125 15:27:08.054478 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_213ec08a-1b84-45bb-a867-7f077f18c908/cinder-api-log/0.log" Nov 25 15:27:08 crc kubenswrapper[4796]: I1125 15:27:08.099184 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_213ec08a-1b84-45bb-a867-7f077f18c908/cinder-api/0.log" Nov 25 15:27:08 crc kubenswrapper[4796]: I1125 15:27:08.231845 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7ac2f3b3-e1cc-4536-b6b3-eacb46b887db/cinder-scheduler/0.log" Nov 25 15:27:08 crc kubenswrapper[4796]: I1125 15:27:08.290443 4796 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7ac2f3b3-e1cc-4536-b6b3-eacb46b887db/probe/0.log" Nov 25 15:27:08 crc kubenswrapper[4796]: I1125 15:27:08.372657 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh_cb697a58-06f8-4133-bb60-109f14009dad/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:27:08 crc kubenswrapper[4796]: I1125 15:27:08.563503 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-7x58g_7ee7821f-7c42-4833-bdda-e32b06b2e1b8/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:27:08 crc kubenswrapper[4796]: I1125 15:27:08.599311 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-tjxqx_64408db4-ea13-40ee-b40d-ce6e489f2b82/init/0.log" Nov 25 15:27:08 crc kubenswrapper[4796]: I1125 15:27:08.753829 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-tjxqx_64408db4-ea13-40ee-b40d-ce6e489f2b82/init/0.log" Nov 25 15:27:08 crc kubenswrapper[4796]: I1125 15:27:08.773786 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-tjxqx_64408db4-ea13-40ee-b40d-ce6e489f2b82/dnsmasq-dns/0.log" Nov 25 15:27:08 crc kubenswrapper[4796]: I1125 15:27:08.910381 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-h678c_9c76afe2-174a-4c31-a551-101661ae546b/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:27:08 crc kubenswrapper[4796]: I1125 15:27:08.983386 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cbf103ff-9a5b-408b-b69a-9383d471a83a/glance-httpd/0.log" Nov 25 15:27:09 crc kubenswrapper[4796]: I1125 15:27:09.050227 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_cbf103ff-9a5b-408b-b69a-9383d471a83a/glance-log/0.log" Nov 25 15:27:09 crc kubenswrapper[4796]: I1125 15:27:09.204316 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_498b441d-79fc-4fa9-b857-72cf2f022ec9/glance-log/0.log" Nov 25 15:27:09 crc kubenswrapper[4796]: I1125 15:27:09.559482 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_498b441d-79fc-4fa9-b857-72cf2f022ec9/glance-httpd/0.log" Nov 25 15:27:09 crc kubenswrapper[4796]: I1125 15:27:09.796416 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8_552fef9f-5b94-4e45-9765-5b5e6ee62bfa/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:27:09 crc kubenswrapper[4796]: I1125 15:27:09.883282 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-674489f5b-nnl97_b8f52433-dd17-499e-8ac4-bda250a52460/horizon/0.log" Nov 25 15:27:10 crc kubenswrapper[4796]: I1125 15:27:10.153429 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-wvt59_e7c0033b-a387-447e-89cf-43e3a0f237d0/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:27:10 crc kubenswrapper[4796]: I1125 15:27:10.184179 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-674489f5b-nnl97_b8f52433-dd17-499e-8ac4-bda250a52460/horizon-log/0.log" Nov 25 15:27:10 crc kubenswrapper[4796]: I1125 15:27:10.381234 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5d994c97d7-9qxnr_47119c19-fca4-4a63-8170-d4dee8201af8/keystone-api/0.log" Nov 25 15:27:10 crc kubenswrapper[4796]: I1125 15:27:10.383872 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29401381-s9lbp_72d4d931-5b18-49ad-a427-9997259fc320/keystone-cron/0.log" Nov 25 15:27:10 crc kubenswrapper[4796]: I1125 15:27:10.515276 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_da9248b8-0e46-4c9a-837c-b5591fc3e559/kube-state-metrics/0.log" Nov 25 15:27:10 crc kubenswrapper[4796]: I1125 15:27:10.650899 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb_5e5ea533-89ca-434d-bde5-0222fa319b66/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:27:11 crc kubenswrapper[4796]: I1125 15:27:11.149131 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt_3a001f8e-537d-4c17-88cd-b1c2a8727074/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:27:11 crc kubenswrapper[4796]: I1125 15:27:11.174834 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b8d7f79d9-dhp4t_d300f40d-3177-4832-9df9-b724d40b8622/neutron-httpd/0.log" Nov 25 15:27:11 crc kubenswrapper[4796]: I1125 15:27:11.255123 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b8d7f79d9-dhp4t_d300f40d-3177-4832-9df9-b724d40b8622/neutron-api/0.log" Nov 25 15:27:11 crc kubenswrapper[4796]: I1125 15:27:11.660440 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_3d5bdd76-c116-469f-84a1-c869e4ffb5ce/nova-cell0-conductor-conductor/0.log" Nov 25 15:27:11 crc kubenswrapper[4796]: I1125 15:27:11.798494 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_86950200-06a3-4ad0-9a40-d70deeba8ce3/nova-api-log/0.log" Nov 25 15:27:11 crc kubenswrapper[4796]: I1125 15:27:11.965130 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_86950200-06a3-4ad0-9a40-d70deeba8ce3/nova-api-api/0.log" Nov 25 
15:27:12 crc kubenswrapper[4796]: I1125 15:27:12.058082 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_614944f2-a1d3-41e0-82a4-3182bd6770af/nova-cell1-conductor-conductor/0.log" Nov 25 15:27:12 crc kubenswrapper[4796]: I1125 15:27:12.312456 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_a14facfc-22d1-4b36-a006-23af447aef93/nova-cell1-novncproxy-novncproxy/0.log" Nov 25 15:27:12 crc kubenswrapper[4796]: I1125 15:27:12.552371 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-5l2zn_8c595aba-53f4-47cf-9b97-c489fb013f6e/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:27:12 crc kubenswrapper[4796]: I1125 15:27:12.637749 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_74f7062a-bcf7-494e-81ff-955f99fd6707/nova-metadata-log/0.log" Nov 25 15:27:12 crc kubenswrapper[4796]: I1125 15:27:12.977320 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fba50302-0f98-4117-ae49-f710e1543e98/mysql-bootstrap/0.log" Nov 25 15:27:12 crc kubenswrapper[4796]: I1125 15:27:12.999745 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_f40e0fe8-470b-4092-a179-4e4df56f8900/nova-scheduler-scheduler/0.log" Nov 25 15:27:13 crc kubenswrapper[4796]: I1125 15:27:13.103840 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fba50302-0f98-4117-ae49-f710e1543e98/mysql-bootstrap/0.log" Nov 25 15:27:13 crc kubenswrapper[4796]: I1125 15:27:13.206247 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fba50302-0f98-4117-ae49-f710e1543e98/galera/0.log" Nov 25 15:27:13 crc kubenswrapper[4796]: I1125 15:27:13.371812 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_25a388f4-cd5a-404d-a777-46f4410e0b3a/mysql-bootstrap/0.log" Nov 25 15:27:13 crc kubenswrapper[4796]: I1125 15:27:13.577249 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_25a388f4-cd5a-404d-a777-46f4410e0b3a/mysql-bootstrap/0.log" Nov 25 15:27:13 crc kubenswrapper[4796]: I1125 15:27:13.598194 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_25a388f4-cd5a-404d-a777-46f4410e0b3a/galera/0.log" Nov 25 15:27:13 crc kubenswrapper[4796]: I1125 15:27:13.816563 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-jftkt_9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718/ovn-controller/0.log" Nov 25 15:27:13 crc kubenswrapper[4796]: I1125 15:27:13.836767 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_120f9ac5-531c-4821-b033-d4b316f6ea61/openstackclient/0.log" Nov 25 15:27:13 crc kubenswrapper[4796]: I1125 15:27:13.876956 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_74f7062a-bcf7-494e-81ff-955f99fd6707/nova-metadata-metadata/0.log" Nov 25 15:27:14 crc kubenswrapper[4796]: I1125 15:27:14.113632 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-t8mfd_5d31b742-a284-4a5f-a151-2ee4077a3071/openstack-network-exporter/0.log" Nov 25 15:27:14 crc kubenswrapper[4796]: I1125 15:27:14.258776 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bcptz_130773d9-cc1a-46d3-91a4-1880735e0351/ovsdb-server-init/0.log" Nov 25 15:27:14 crc kubenswrapper[4796]: I1125 15:27:14.395914 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bcptz_130773d9-cc1a-46d3-91a4-1880735e0351/ovs-vswitchd/0.log" Nov 25 15:27:14 crc kubenswrapper[4796]: I1125 15:27:14.501977 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-bcptz_130773d9-cc1a-46d3-91a4-1880735e0351/ovsdb-server-init/0.log" Nov 25 15:27:14 crc kubenswrapper[4796]: I1125 15:27:14.536592 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bcptz_130773d9-cc1a-46d3-91a4-1880735e0351/ovsdb-server/0.log" Nov 25 15:27:14 crc kubenswrapper[4796]: I1125 15:27:14.729026 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-fwnf6_a07af2cf-4057-4032-8535-6e8067892269/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:27:14 crc kubenswrapper[4796]: I1125 15:27:14.851340 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b5336ecd-5d7e-4b73-b2a7-d289b8578641/openstack-network-exporter/0.log" Nov 25 15:27:14 crc kubenswrapper[4796]: I1125 15:27:14.852997 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b5336ecd-5d7e-4b73-b2a7-d289b8578641/ovn-northd/0.log" Nov 25 15:27:15 crc kubenswrapper[4796]: I1125 15:27:15.004662 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064/openstack-network-exporter/0.log" Nov 25 15:27:15 crc kubenswrapper[4796]: I1125 15:27:15.136596 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064/ovsdbserver-nb/0.log" Nov 25 15:27:15 crc kubenswrapper[4796]: I1125 15:27:15.672029 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1c9e8c13-5a24-4394-bdc8-aa4965e931b8/openstack-network-exporter/0.log" Nov 25 15:27:15 crc kubenswrapper[4796]: I1125 15:27:15.693209 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1c9e8c13-5a24-4394-bdc8-aa4965e931b8/ovsdbserver-sb/0.log" Nov 25 15:27:15 crc kubenswrapper[4796]: I1125 15:27:15.856659 4796 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_placement-79bd96dcd6-f2n5f_970dd58d-4266-4a39-9d8b-75190f4286bc/placement-api/0.log" Nov 25 15:27:15 crc kubenswrapper[4796]: I1125 15:27:15.974476 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-79bd96dcd6-f2n5f_970dd58d-4266-4a39-9d8b-75190f4286bc/placement-log/0.log" Nov 25 15:27:16 crc kubenswrapper[4796]: I1125 15:27:16.068925 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f5d14d1f-b7c5-4d86-9420-fbf8a044780c/setup-container/0.log" Nov 25 15:27:16 crc kubenswrapper[4796]: I1125 15:27:16.297849 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f5d14d1f-b7c5-4d86-9420-fbf8a044780c/rabbitmq/0.log" Nov 25 15:27:16 crc kubenswrapper[4796]: I1125 15:27:16.363434 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f5d14d1f-b7c5-4d86-9420-fbf8a044780c/setup-container/0.log" Nov 25 15:27:16 crc kubenswrapper[4796]: I1125 15:27:16.389092 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0bde17cd-d557-45b1-8796-d7293d21c038/setup-container/0.log" Nov 25 15:27:16 crc kubenswrapper[4796]: I1125 15:27:16.630335 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0bde17cd-d557-45b1-8796-d7293d21c038/setup-container/0.log" Nov 25 15:27:16 crc kubenswrapper[4796]: I1125 15:27:16.688081 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q_b8bdd873-343d-4d77-849e-14786c8db01d/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:27:16 crc kubenswrapper[4796]: I1125 15:27:16.751115 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0bde17cd-d557-45b1-8796-d7293d21c038/rabbitmq/0.log" Nov 25 15:27:16 crc kubenswrapper[4796]: I1125 15:27:16.931650 4796 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-8rjcv_3fc16f66-6859-4f61-bdbb-7deaf5ec6831/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:27:17 crc kubenswrapper[4796]: I1125 15:27:17.128925 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q_6699babf-2b9f-432c-b0fd-60452bb9ad6b/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:27:17 crc kubenswrapper[4796]: I1125 15:27:17.226972 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-8pk87_39a7e6ad-f344-409f-b5a0-664a602fdf66/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:27:17 crc kubenswrapper[4796]: I1125 15:27:17.352652 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-8zlff_60a504c5-7f00-43a4-a364-c3be0b31a42d/ssh-known-hosts-edpm-deployment/0.log" Nov 25 15:27:17 crc kubenswrapper[4796]: I1125 15:27:17.591147 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6b6dc55d99-xcq8j_05a9e311-75a5-4732-9103-ba2bc1e708ad/proxy-server/0.log" Nov 25 15:27:17 crc kubenswrapper[4796]: I1125 15:27:17.672670 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6b6dc55d99-xcq8j_05a9e311-75a5-4732-9103-ba2bc1e708ad/proxy-httpd/0.log" Nov 25 15:27:17 crc kubenswrapper[4796]: I1125 15:27:17.714390 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-qbvtm_8a9e78aa-7f69-46de-b6a9-03f837e4f364/swift-ring-rebalance/0.log" Nov 25 15:27:17 crc kubenswrapper[4796]: I1125 15:27:17.832842 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/account-auditor/0.log" Nov 25 15:27:17 crc kubenswrapper[4796]: I1125 15:27:17.897622 4796 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/account-reaper/0.log" Nov 25 15:27:18 crc kubenswrapper[4796]: I1125 15:27:18.006836 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/account-replicator/0.log" Nov 25 15:27:18 crc kubenswrapper[4796]: I1125 15:27:18.076852 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/account-server/0.log" Nov 25 15:27:18 crc kubenswrapper[4796]: I1125 15:27:18.130191 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/container-auditor/0.log" Nov 25 15:27:18 crc kubenswrapper[4796]: I1125 15:27:18.169627 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/container-replicator/0.log" Nov 25 15:27:18 crc kubenswrapper[4796]: I1125 15:27:18.270855 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/container-server/0.log" Nov 25 15:27:18 crc kubenswrapper[4796]: I1125 15:27:18.283623 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/container-updater/0.log" Nov 25 15:27:18 crc kubenswrapper[4796]: I1125 15:27:18.366363 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/object-auditor/0.log" Nov 25 15:27:18 crc kubenswrapper[4796]: I1125 15:27:18.498386 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/object-expirer/0.log" Nov 25 15:27:18 crc kubenswrapper[4796]: I1125 15:27:18.547373 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/object-server/0.log" Nov 25 15:27:18 crc kubenswrapper[4796]: I1125 15:27:18.549014 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/object-replicator/0.log" Nov 25 15:27:18 crc kubenswrapper[4796]: I1125 15:27:18.615291 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/object-updater/0.log" Nov 25 15:27:18 crc kubenswrapper[4796]: I1125 15:27:18.731756 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/rsync/0.log" Nov 25 15:27:18 crc kubenswrapper[4796]: I1125 15:27:18.748122 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/swift-recon-cron/0.log" Nov 25 15:27:18 crc kubenswrapper[4796]: I1125 15:27:18.896693 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-99788_885ec954-19ea-488f-badc-9dc879859a45/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:27:18 crc kubenswrapper[4796]: I1125 15:27:18.985562 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6/tempest-tests-tempest-tests-runner/0.log" Nov 25 15:27:19 crc kubenswrapper[4796]: I1125 15:27:19.171308 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_99ad25b2-341c-43c5-a15a-12b70e1711b3/test-operator-logs-container/0.log" Nov 25 15:27:19 crc kubenswrapper[4796]: I1125 15:27:19.408953 4796 scope.go:117] "RemoveContainer" containerID="397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" Nov 25 15:27:19 crc kubenswrapper[4796]: E1125 15:27:19.409277 4796 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:27:19 crc kubenswrapper[4796]: I1125 15:27:19.419151 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-7psgw_f7f8ec51-957f-4356-888b-5bec99691717/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:27:27 crc kubenswrapper[4796]: I1125 15:27:27.089371 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_241f82db-29d5-4cb8-bd81-3e758b9cd855/memcached/0.log" Nov 25 15:27:34 crc kubenswrapper[4796]: I1125 15:27:34.409125 4796 scope.go:117] "RemoveContainer" containerID="397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" Nov 25 15:27:34 crc kubenswrapper[4796]: E1125 15:27:34.409988 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:27:46 crc kubenswrapper[4796]: I1125 15:27:46.806837 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-7q45f_3472c0d0-0763-4342-83cb-5b7a44e5b2e0/kube-rbac-proxy/0.log" Nov 25 15:27:46 crc kubenswrapper[4796]: I1125 15:27:46.893901 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-7q45f_3472c0d0-0763-4342-83cb-5b7a44e5b2e0/manager/0.log" Nov 25 15:27:47 crc kubenswrapper[4796]: I1125 15:27:47.047214 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-4w4wl_3a3976ed-e631-4fda-9b60-1e4b62992c70/kube-rbac-proxy/0.log" Nov 25 15:27:47 crc kubenswrapper[4796]: I1125 15:27:47.077626 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-4w4wl_3a3976ed-e631-4fda-9b60-1e4b62992c70/manager/0.log" Nov 25 15:27:47 crc kubenswrapper[4796]: I1125 15:27:47.215709 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-47sfh_efaf4581-131a-496d-ba2f-75db34748600/kube-rbac-proxy/0.log" Nov 25 15:27:47 crc kubenswrapper[4796]: I1125 15:27:47.225920 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-47sfh_efaf4581-131a-496d-ba2f-75db34748600/manager/0.log" Nov 25 15:27:47 crc kubenswrapper[4796]: I1125 15:27:47.267879 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f_be76be41-9513-40eb-9140-8d3f2ab3a05d/util/0.log" Nov 25 15:27:47 crc kubenswrapper[4796]: I1125 15:27:47.486813 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f_be76be41-9513-40eb-9140-8d3f2ab3a05d/pull/0.log" Nov 25 15:27:47 crc kubenswrapper[4796]: I1125 15:27:47.486982 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f_be76be41-9513-40eb-9140-8d3f2ab3a05d/pull/0.log" Nov 25 15:27:47 crc kubenswrapper[4796]: I1125 
15:27:47.518344 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f_be76be41-9513-40eb-9140-8d3f2ab3a05d/util/0.log" Nov 25 15:27:47 crc kubenswrapper[4796]: I1125 15:27:47.676857 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f_be76be41-9513-40eb-9140-8d3f2ab3a05d/util/0.log" Nov 25 15:27:47 crc kubenswrapper[4796]: I1125 15:27:47.709019 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f_be76be41-9513-40eb-9140-8d3f2ab3a05d/pull/0.log" Nov 25 15:27:47 crc kubenswrapper[4796]: I1125 15:27:47.712634 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f_be76be41-9513-40eb-9140-8d3f2ab3a05d/extract/0.log" Nov 25 15:27:47 crc kubenswrapper[4796]: I1125 15:27:47.868993 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-nfdb6_ed513bf3-e75f-40b3-814e-508f4d9e9ce6/kube-rbac-proxy/0.log" Nov 25 15:27:47 crc kubenswrapper[4796]: I1125 15:27:47.982761 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-w8gkv_4f74b624-2ef6-4289-8cb1-8d6babc260f5/kube-rbac-proxy/0.log" Nov 25 15:27:47 crc kubenswrapper[4796]: I1125 15:27:47.990877 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-nfdb6_ed513bf3-e75f-40b3-814e-508f4d9e9ce6/manager/0.log" Nov 25 15:27:48 crc kubenswrapper[4796]: I1125 15:27:48.031260 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-w8gkv_4f74b624-2ef6-4289-8cb1-8d6babc260f5/manager/0.log" Nov 25 15:27:48 crc kubenswrapper[4796]: I1125 15:27:48.146459 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-7ljk7_5e82891b-b135-4f6a-8341-7ae6efb7d7ab/kube-rbac-proxy/0.log" Nov 25 15:27:48 crc kubenswrapper[4796]: I1125 15:27:48.194107 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-7ljk7_5e82891b-b135-4f6a-8341-7ae6efb7d7ab/manager/0.log" Nov 25 15:27:48 crc kubenswrapper[4796]: I1125 15:27:48.310273 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-tbmwj_9ec5036f-9a2f-4a3f-ad57-191ac97cf6ff/kube-rbac-proxy/0.log" Nov 25 15:27:48 crc kubenswrapper[4796]: I1125 15:27:48.409129 4796 scope.go:117] "RemoveContainer" containerID="397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" Nov 25 15:27:48 crc kubenswrapper[4796]: E1125 15:27:48.409603 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:27:48 crc kubenswrapper[4796]: I1125 15:27:48.490641 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-7xmhd_c20eb9b8-4c87-4145-b550-e887fd680797/kube-rbac-proxy/0.log" Nov 25 15:27:48 crc kubenswrapper[4796]: I1125 15:27:48.491700 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-tbmwj_9ec5036f-9a2f-4a3f-ad57-191ac97cf6ff/manager/0.log" Nov 25 15:27:48 crc kubenswrapper[4796]: I1125 15:27:48.526828 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-7xmhd_c20eb9b8-4c87-4145-b550-e887fd680797/manager/0.log" Nov 25 15:27:48 crc kubenswrapper[4796]: I1125 15:27:48.657121 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-wqkh5_f1937d85-62aa-4880-81ca-91d58ab2fba2/kube-rbac-proxy/0.log" Nov 25 15:27:48 crc kubenswrapper[4796]: I1125 15:27:48.751003 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-wqkh5_f1937d85-62aa-4880-81ca-91d58ab2fba2/manager/0.log" Nov 25 15:27:48 crc kubenswrapper[4796]: I1125 15:27:48.783592 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-v9j5d_e5bf5c53-1a09-4635-9ebb-e2a6fb722e06/kube-rbac-proxy/0.log" Nov 25 15:27:48 crc kubenswrapper[4796]: I1125 15:27:48.844951 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-v9j5d_e5bf5c53-1a09-4635-9ebb-e2a6fb722e06/manager/0.log" Nov 25 15:27:48 crc kubenswrapper[4796]: I1125 15:27:48.940969 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-h96k8_7cda050e-831a-42f8-93f7-c33e10a8b119/kube-rbac-proxy/0.log" Nov 25 15:27:48 crc kubenswrapper[4796]: I1125 15:27:48.989057 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-h96k8_7cda050e-831a-42f8-93f7-c33e10a8b119/manager/0.log" Nov 25 15:27:49 crc kubenswrapper[4796]: I1125 15:27:49.093495 4796 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-mfg66_b652a700-3131-4706-a300-c3f2c54519a3/kube-rbac-proxy/0.log" Nov 25 15:27:49 crc kubenswrapper[4796]: I1125 15:27:49.171378 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-mfg66_b652a700-3131-4706-a300-c3f2c54519a3/manager/0.log" Nov 25 15:27:49 crc kubenswrapper[4796]: I1125 15:27:49.331889 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-7c6bw_5575133b-4226-4a90-b484-aeb1bbcb4dde/kube-rbac-proxy/0.log" Nov 25 15:27:49 crc kubenswrapper[4796]: I1125 15:27:49.373676 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-7c6bw_5575133b-4226-4a90-b484-aeb1bbcb4dde/manager/0.log" Nov 25 15:27:49 crc kubenswrapper[4796]: I1125 15:27:49.422168 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-jsccj_4e72b995-27a7-4777-9d17-7b04a3933074/kube-rbac-proxy/0.log" Nov 25 15:27:49 crc kubenswrapper[4796]: I1125 15:27:49.534849 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-jsccj_4e72b995-27a7-4777-9d17-7b04a3933074/manager/0.log" Nov 25 15:27:49 crc kubenswrapper[4796]: I1125 15:27:49.617928 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh_399a4df5-120a-40fc-9570-4555ab767e70/kube-rbac-proxy/0.log" Nov 25 15:27:49 crc kubenswrapper[4796]: I1125 15:27:49.628088 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh_399a4df5-120a-40fc-9570-4555ab767e70/manager/0.log" Nov 25 
15:27:49 crc kubenswrapper[4796]: I1125 15:27:49.998372 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-28ljh_edc88d92-5818-49e5-877c-5efd6a8e1912/registry-server/0.log" Nov 25 15:27:50 crc kubenswrapper[4796]: I1125 15:27:50.029440 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5fd4b8b4b5-s2rpd_742f74a5-8ef5-42df-8644-16b6209f5172/operator/0.log" Nov 25 15:27:50 crc kubenswrapper[4796]: I1125 15:27:50.209909 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-jg56b_2d798aaf-7f02-472d-a5c9-53853ce7b2a4/kube-rbac-proxy/0.log" Nov 25 15:27:50 crc kubenswrapper[4796]: I1125 15:27:50.278819 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-jg56b_2d798aaf-7f02-472d-a5c9-53853ce7b2a4/manager/0.log" Nov 25 15:27:50 crc kubenswrapper[4796]: I1125 15:27:50.355463 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-z7q4q_217cf053-2a6e-4fbd-8544-830952c6c803/kube-rbac-proxy/0.log" Nov 25 15:27:50 crc kubenswrapper[4796]: I1125 15:27:50.458061 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-z7q4q_217cf053-2a6e-4fbd-8544-830952c6c803/manager/0.log" Nov 25 15:27:50 crc kubenswrapper[4796]: I1125 15:27:50.572762 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-7b78n_833cc3da-1e55-4b00-9766-5bc81f81a506/operator/0.log" Nov 25 15:27:50 crc kubenswrapper[4796]: I1125 15:27:50.749223 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-6rrmf_5871d7ea-743f-4b9b-9d49-e02f51222ea7/manager/0.log" Nov 25 15:27:50 crc kubenswrapper[4796]: I1125 15:27:50.754294 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-6rrmf_5871d7ea-743f-4b9b-9d49-e02f51222ea7/kube-rbac-proxy/0.log" Nov 25 15:27:50 crc kubenswrapper[4796]: I1125 15:27:50.838620 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-2v9lc_bdc6cc60-f602-4a4e-9f3a-60fc12a9b29e/kube-rbac-proxy/0.log" Nov 25 15:27:50 crc kubenswrapper[4796]: I1125 15:27:50.941435 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-77bf44fb75-9sjgx_909ee785-5087-4b08-9590-10993e0fdeba/manager/0.log" Nov 25 15:27:51 crc kubenswrapper[4796]: I1125 15:27:51.029200 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-2v9lc_bdc6cc60-f602-4a4e-9f3a-60fc12a9b29e/manager/0.log" Nov 25 15:27:51 crc kubenswrapper[4796]: I1125 15:27:51.065233 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-6bbxk_dba98963-8ddb-46d0-a6a7-62f337d6d520/kube-rbac-proxy/0.log" Nov 25 15:27:51 crc kubenswrapper[4796]: I1125 15:27:51.089427 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-6bbxk_dba98963-8ddb-46d0-a6a7-62f337d6d520/manager/0.log" Nov 25 15:27:51 crc kubenswrapper[4796]: I1125 15:27:51.230229 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-99zgm_312c47f9-34dd-4416-b396-fd4f9855e72e/kube-rbac-proxy/0.log" Nov 25 15:27:51 crc kubenswrapper[4796]: I1125 15:27:51.249750 4796 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-99zgm_312c47f9-34dd-4416-b396-fd4f9855e72e/manager/0.log" Nov 25 15:28:01 crc kubenswrapper[4796]: I1125 15:28:01.409620 4796 scope.go:117] "RemoveContainer" containerID="397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" Nov 25 15:28:01 crc kubenswrapper[4796]: E1125 15:28:01.410519 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:28:10 crc kubenswrapper[4796]: I1125 15:28:10.189967 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-64jzs_63aeb87d-a8b1-40a5-95b9-e224d1bd968f/control-plane-machine-set-operator/0.log" Nov 25 15:28:10 crc kubenswrapper[4796]: I1125 15:28:10.334173 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lvdx5_67c0424c-b0ff-417d-bf4c-1cdcadd1ebac/kube-rbac-proxy/0.log" Nov 25 15:28:10 crc kubenswrapper[4796]: I1125 15:28:10.402076 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lvdx5_67c0424c-b0ff-417d-bf4c-1cdcadd1ebac/machine-api-operator/0.log" Nov 25 15:28:13 crc kubenswrapper[4796]: I1125 15:28:13.410408 4796 scope.go:117] "RemoveContainer" containerID="397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" Nov 25 15:28:13 crc kubenswrapper[4796]: E1125 15:28:13.411763 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:28:23 crc kubenswrapper[4796]: I1125 15:28:23.873912 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-qzs2l_4b5c4e21-18ed-4eee-a81a-f08cf71498e5/cert-manager-controller/0.log" Nov 25 15:28:24 crc kubenswrapper[4796]: I1125 15:28:24.068547 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-n7x98_d7365735-d514-48fd-9113-62a80d791d8b/cert-manager-webhook/0.log" Nov 25 15:28:24 crc kubenswrapper[4796]: I1125 15:28:24.101297 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-ttph6_67aeab52-9ff0-430d-8e78-0f46f59e1688/cert-manager-cainjector/0.log" Nov 25 15:28:26 crc kubenswrapper[4796]: I1125 15:28:26.409689 4796 scope.go:117] "RemoveContainer" containerID="397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" Nov 25 15:28:26 crc kubenswrapper[4796]: E1125 15:28:26.410139 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:28:39 crc kubenswrapper[4796]: I1125 15:28:39.387541 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-74nqq_ebb6d789-f33f-47d5-a8b5-b727a0d54def/nmstate-console-plugin/0.log" Nov 25 15:28:39 crc 
kubenswrapper[4796]: I1125 15:28:39.562664 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-5whlr_d050fb17-6f98-4899-861e-b180f1587b64/nmstate-handler/0.log" Nov 25 15:28:39 crc kubenswrapper[4796]: I1125 15:28:39.635190 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-z2g7r_b129a211-721a-412c-95fd-a1c27b7d3092/kube-rbac-proxy/0.log" Nov 25 15:28:39 crc kubenswrapper[4796]: I1125 15:28:39.679736 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-z2g7r_b129a211-721a-412c-95fd-a1c27b7d3092/nmstate-metrics/0.log" Nov 25 15:28:39 crc kubenswrapper[4796]: I1125 15:28:39.835912 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-kcqf5_5c8c5a1b-b996-41da-96ab-07156e73016f/nmstate-operator/0.log" Nov 25 15:28:39 crc kubenswrapper[4796]: I1125 15:28:39.880963 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-2mjnf_7bcb5530-fd67-4fc7-96c1-dfdb9dd8ad67/nmstate-webhook/0.log" Nov 25 15:28:41 crc kubenswrapper[4796]: I1125 15:28:41.409525 4796 scope.go:117] "RemoveContainer" containerID="397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" Nov 25 15:28:41 crc kubenswrapper[4796]: E1125 15:28:41.410433 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:28:55 crc kubenswrapper[4796]: I1125 15:28:55.409600 4796 scope.go:117] "RemoveContainer" 
containerID="397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" Nov 25 15:28:55 crc kubenswrapper[4796]: E1125 15:28:55.410518 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:28:56 crc kubenswrapper[4796]: I1125 15:28:56.455412 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-zr8xl_1979dccd-b017-42f5-9fe1-8717af3f948a/kube-rbac-proxy/0.log" Nov 25 15:28:56 crc kubenswrapper[4796]: I1125 15:28:56.639143 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-zr8xl_1979dccd-b017-42f5-9fe1-8717af3f948a/controller/0.log" Nov 25 15:28:56 crc kubenswrapper[4796]: I1125 15:28:56.720430 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/cp-frr-files/0.log" Nov 25 15:28:56 crc kubenswrapper[4796]: I1125 15:28:56.924697 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/cp-metrics/0.log" Nov 25 15:28:56 crc kubenswrapper[4796]: I1125 15:28:56.931622 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/cp-reloader/0.log" Nov 25 15:28:56 crc kubenswrapper[4796]: I1125 15:28:56.956284 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/cp-reloader/0.log" Nov 25 15:28:56 crc kubenswrapper[4796]: I1125 15:28:56.962005 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/cp-frr-files/0.log" Nov 25 15:28:57 crc kubenswrapper[4796]: I1125 15:28:57.121849 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/cp-frr-files/0.log" Nov 25 15:28:57 crc kubenswrapper[4796]: I1125 15:28:57.160506 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/cp-metrics/0.log" Nov 25 15:28:57 crc kubenswrapper[4796]: I1125 15:28:57.168957 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/cp-reloader/0.log" Nov 25 15:28:57 crc kubenswrapper[4796]: I1125 15:28:57.173068 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/cp-metrics/0.log" Nov 25 15:28:57 crc kubenswrapper[4796]: I1125 15:28:57.363863 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/cp-frr-files/0.log" Nov 25 15:28:57 crc kubenswrapper[4796]: I1125 15:28:57.419621 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/controller/0.log" Nov 25 15:28:57 crc kubenswrapper[4796]: I1125 15:28:57.422380 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/cp-metrics/0.log" Nov 25 15:28:57 crc kubenswrapper[4796]: I1125 15:28:57.426550 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/cp-reloader/0.log" Nov 25 15:28:57 crc kubenswrapper[4796]: I1125 15:28:57.772724 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/frr-metrics/0.log" Nov 25 15:28:57 crc kubenswrapper[4796]: I1125 15:28:57.809998 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/kube-rbac-proxy/0.log" Nov 25 15:28:57 crc kubenswrapper[4796]: I1125 15:28:57.854072 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/kube-rbac-proxy-frr/0.log" Nov 25 15:28:58 crc kubenswrapper[4796]: I1125 15:28:58.037275 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/reloader/0.log" Nov 25 15:28:58 crc kubenswrapper[4796]: I1125 15:28:58.099779 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-zk9xk_79869a5f-b9a3-46e0-bac7-9ff9ac72b16c/frr-k8s-webhook-server/0.log" Nov 25 15:28:58 crc kubenswrapper[4796]: I1125 15:28:58.291146 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-68786bb9d9-qc95x_5f701779-96c6-4764-b207-88847114d7c8/manager/0.log" Nov 25 15:28:58 crc kubenswrapper[4796]: I1125 15:28:58.449406 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-778544677-4pg8n_5a58cf97-35a8-4201-91b5-c03fce0361b8/webhook-server/0.log" Nov 25 15:28:58 crc kubenswrapper[4796]: I1125 15:28:58.540096 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kq8m7_7f037a6b-9e7f-401d-b4db-98132fb0f9b2/kube-rbac-proxy/0.log" Nov 25 15:28:59 crc kubenswrapper[4796]: I1125 15:28:59.147089 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kq8m7_7f037a6b-9e7f-401d-b4db-98132fb0f9b2/speaker/0.log" Nov 25 15:28:59 crc kubenswrapper[4796]: I1125 15:28:59.178929 4796 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/frr/0.log" Nov 25 15:29:07 crc kubenswrapper[4796]: I1125 15:29:07.409560 4796 scope.go:117] "RemoveContainer" containerID="397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" Nov 25 15:29:07 crc kubenswrapper[4796]: E1125 15:29:07.410403 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:29:12 crc kubenswrapper[4796]: I1125 15:29:12.648966 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq_1fee00b0-68b7-43d4-85a5-d63daf73962d/util/0.log" Nov 25 15:29:12 crc kubenswrapper[4796]: I1125 15:29:12.880353 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq_1fee00b0-68b7-43d4-85a5-d63daf73962d/util/0.log" Nov 25 15:29:12 crc kubenswrapper[4796]: I1125 15:29:12.887255 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq_1fee00b0-68b7-43d4-85a5-d63daf73962d/pull/0.log" Nov 25 15:29:12 crc kubenswrapper[4796]: I1125 15:29:12.920966 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq_1fee00b0-68b7-43d4-85a5-d63daf73962d/pull/0.log" Nov 25 15:29:13 crc kubenswrapper[4796]: I1125 15:29:13.056589 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq_1fee00b0-68b7-43d4-85a5-d63daf73962d/pull/0.log" Nov 25 15:29:13 crc kubenswrapper[4796]: I1125 15:29:13.060423 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq_1fee00b0-68b7-43d4-85a5-d63daf73962d/extract/0.log" Nov 25 15:29:13 crc kubenswrapper[4796]: I1125 15:29:13.069047 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq_1fee00b0-68b7-43d4-85a5-d63daf73962d/util/0.log" Nov 25 15:29:13 crc kubenswrapper[4796]: I1125 15:29:13.248713 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kwqht_60c5e697-1e70-4d50-a2ed-f7dba77a5520/extract-utilities/0.log" Nov 25 15:29:13 crc kubenswrapper[4796]: I1125 15:29:13.421014 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kwqht_60c5e697-1e70-4d50-a2ed-f7dba77a5520/extract-utilities/0.log" Nov 25 15:29:13 crc kubenswrapper[4796]: I1125 15:29:13.434131 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kwqht_60c5e697-1e70-4d50-a2ed-f7dba77a5520/extract-content/0.log" Nov 25 15:29:13 crc kubenswrapper[4796]: I1125 15:29:13.466867 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kwqht_60c5e697-1e70-4d50-a2ed-f7dba77a5520/extract-content/0.log" Nov 25 15:29:13 crc kubenswrapper[4796]: I1125 15:29:13.565551 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kwqht_60c5e697-1e70-4d50-a2ed-f7dba77a5520/extract-utilities/0.log" Nov 25 15:29:13 crc kubenswrapper[4796]: I1125 15:29:13.626850 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-kwqht_60c5e697-1e70-4d50-a2ed-f7dba77a5520/extract-content/0.log" Nov 25 15:29:13 crc kubenswrapper[4796]: I1125 15:29:13.827881 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s7dmr_7d7052ec-4340-472c-8add-94483920eeac/extract-utilities/0.log" Nov 25 15:29:14 crc kubenswrapper[4796]: I1125 15:29:14.046793 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s7dmr_7d7052ec-4340-472c-8add-94483920eeac/extract-content/0.log" Nov 25 15:29:14 crc kubenswrapper[4796]: I1125 15:29:14.094171 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s7dmr_7d7052ec-4340-472c-8add-94483920eeac/extract-utilities/0.log" Nov 25 15:29:14 crc kubenswrapper[4796]: I1125 15:29:14.175029 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s7dmr_7d7052ec-4340-472c-8add-94483920eeac/extract-content/0.log" Nov 25 15:29:14 crc kubenswrapper[4796]: I1125 15:29:14.179174 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kwqht_60c5e697-1e70-4d50-a2ed-f7dba77a5520/registry-server/0.log" Nov 25 15:29:14 crc kubenswrapper[4796]: I1125 15:29:14.325195 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s7dmr_7d7052ec-4340-472c-8add-94483920eeac/extract-utilities/0.log" Nov 25 15:29:14 crc kubenswrapper[4796]: I1125 15:29:14.330697 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s7dmr_7d7052ec-4340-472c-8add-94483920eeac/extract-content/0.log" Nov 25 15:29:14 crc kubenswrapper[4796]: I1125 15:29:14.478028 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-s7dmr_7d7052ec-4340-472c-8add-94483920eeac/registry-server/0.log" Nov 25 15:29:14 crc kubenswrapper[4796]: I1125 15:29:14.581149 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb_655b2cd8-b6a5-4ab4-848d-908496b6bcc8/util/0.log" Nov 25 15:29:14 crc kubenswrapper[4796]: I1125 15:29:14.794906 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb_655b2cd8-b6a5-4ab4-848d-908496b6bcc8/pull/0.log" Nov 25 15:29:14 crc kubenswrapper[4796]: I1125 15:29:14.817970 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb_655b2cd8-b6a5-4ab4-848d-908496b6bcc8/pull/0.log" Nov 25 15:29:14 crc kubenswrapper[4796]: I1125 15:29:14.823435 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb_655b2cd8-b6a5-4ab4-848d-908496b6bcc8/util/0.log" Nov 25 15:29:15 crc kubenswrapper[4796]: I1125 15:29:15.002559 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb_655b2cd8-b6a5-4ab4-848d-908496b6bcc8/extract/0.log" Nov 25 15:29:15 crc kubenswrapper[4796]: I1125 15:29:15.019860 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb_655b2cd8-b6a5-4ab4-848d-908496b6bcc8/pull/0.log" Nov 25 15:29:15 crc kubenswrapper[4796]: I1125 15:29:15.034412 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb_655b2cd8-b6a5-4ab4-848d-908496b6bcc8/util/0.log" Nov 25 15:29:15 crc 
kubenswrapper[4796]: I1125 15:29:15.190521 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hfxxz_f1695f85-c20b-4708-b4f0-006f3a269301/marketplace-operator/0.log" Nov 25 15:29:15 crc kubenswrapper[4796]: I1125 15:29:15.263226 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jbwpd_9b81a274-2b8a-4f1b-8890-ffa61ef91055/extract-utilities/0.log" Nov 25 15:29:15 crc kubenswrapper[4796]: I1125 15:29:15.428004 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jbwpd_9b81a274-2b8a-4f1b-8890-ffa61ef91055/extract-utilities/0.log" Nov 25 15:29:15 crc kubenswrapper[4796]: I1125 15:29:15.463997 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jbwpd_9b81a274-2b8a-4f1b-8890-ffa61ef91055/extract-content/0.log" Nov 25 15:29:15 crc kubenswrapper[4796]: I1125 15:29:15.468135 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jbwpd_9b81a274-2b8a-4f1b-8890-ffa61ef91055/extract-content/0.log" Nov 25 15:29:15 crc kubenswrapper[4796]: I1125 15:29:15.676977 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jbwpd_9b81a274-2b8a-4f1b-8890-ffa61ef91055/extract-content/0.log" Nov 25 15:29:15 crc kubenswrapper[4796]: I1125 15:29:15.691927 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jbwpd_9b81a274-2b8a-4f1b-8890-ffa61ef91055/extract-utilities/0.log" Nov 25 15:29:15 crc kubenswrapper[4796]: I1125 15:29:15.824102 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jbwpd_9b81a274-2b8a-4f1b-8890-ffa61ef91055/registry-server/0.log" Nov 25 15:29:15 crc kubenswrapper[4796]: I1125 15:29:15.888161 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-f5xps_5b44682b-4eeb-434a-a769-94289e240d6e/extract-utilities/0.log" Nov 25 15:29:16 crc kubenswrapper[4796]: I1125 15:29:16.104905 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f5xps_5b44682b-4eeb-434a-a769-94289e240d6e/extract-utilities/0.log" Nov 25 15:29:16 crc kubenswrapper[4796]: I1125 15:29:16.106824 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f5xps_5b44682b-4eeb-434a-a769-94289e240d6e/extract-content/0.log" Nov 25 15:29:16 crc kubenswrapper[4796]: I1125 15:29:16.154362 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f5xps_5b44682b-4eeb-434a-a769-94289e240d6e/extract-content/0.log" Nov 25 15:29:16 crc kubenswrapper[4796]: I1125 15:29:16.294993 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f5xps_5b44682b-4eeb-434a-a769-94289e240d6e/extract-content/0.log" Nov 25 15:29:16 crc kubenswrapper[4796]: I1125 15:29:16.297444 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f5xps_5b44682b-4eeb-434a-a769-94289e240d6e/extract-utilities/0.log" Nov 25 15:29:16 crc kubenswrapper[4796]: I1125 15:29:16.424078 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f5xps_5b44682b-4eeb-434a-a769-94289e240d6e/registry-server/0.log" Nov 25 15:29:22 crc kubenswrapper[4796]: I1125 15:29:22.421307 4796 scope.go:117] "RemoveContainer" containerID="397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" Nov 25 15:29:22 crc kubenswrapper[4796]: E1125 15:29:22.422339 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:29:35 crc kubenswrapper[4796]: I1125 15:29:35.409801 4796 scope.go:117] "RemoveContainer" containerID="397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" Nov 25 15:29:35 crc kubenswrapper[4796]: E1125 15:29:35.410762 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:29:40 crc kubenswrapper[4796]: I1125 15:29:40.398094 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zc7j4"] Nov 25 15:29:40 crc kubenswrapper[4796]: E1125 15:29:40.398985 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2fcb24-010d-49f0-85a0-bd78ad0b021c" containerName="container-00" Nov 25 15:29:40 crc kubenswrapper[4796]: I1125 15:29:40.399001 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2fcb24-010d-49f0-85a0-bd78ad0b021c" containerName="container-00" Nov 25 15:29:40 crc kubenswrapper[4796]: I1125 15:29:40.399188 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca2fcb24-010d-49f0-85a0-bd78ad0b021c" containerName="container-00" Nov 25 15:29:40 crc kubenswrapper[4796]: I1125 15:29:40.400796 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zc7j4" Nov 25 15:29:40 crc kubenswrapper[4796]: I1125 15:29:40.428788 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zc7j4"] Nov 25 15:29:40 crc kubenswrapper[4796]: I1125 15:29:40.535907 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qkvn\" (UniqueName: \"kubernetes.io/projected/f4d7a454-7a02-478f-bbfe-a4421bd7d39f-kube-api-access-8qkvn\") pod \"redhat-operators-zc7j4\" (UID: \"f4d7a454-7a02-478f-bbfe-a4421bd7d39f\") " pod="openshift-marketplace/redhat-operators-zc7j4" Nov 25 15:29:40 crc kubenswrapper[4796]: I1125 15:29:40.536003 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d7a454-7a02-478f-bbfe-a4421bd7d39f-utilities\") pod \"redhat-operators-zc7j4\" (UID: \"f4d7a454-7a02-478f-bbfe-a4421bd7d39f\") " pod="openshift-marketplace/redhat-operators-zc7j4" Nov 25 15:29:40 crc kubenswrapper[4796]: I1125 15:29:40.536068 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d7a454-7a02-478f-bbfe-a4421bd7d39f-catalog-content\") pod \"redhat-operators-zc7j4\" (UID: \"f4d7a454-7a02-478f-bbfe-a4421bd7d39f\") " pod="openshift-marketplace/redhat-operators-zc7j4" Nov 25 15:29:40 crc kubenswrapper[4796]: I1125 15:29:40.638135 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qkvn\" (UniqueName: \"kubernetes.io/projected/f4d7a454-7a02-478f-bbfe-a4421bd7d39f-kube-api-access-8qkvn\") pod \"redhat-operators-zc7j4\" (UID: \"f4d7a454-7a02-478f-bbfe-a4421bd7d39f\") " pod="openshift-marketplace/redhat-operators-zc7j4" Nov 25 15:29:40 crc kubenswrapper[4796]: I1125 15:29:40.638186 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d7a454-7a02-478f-bbfe-a4421bd7d39f-utilities\") pod \"redhat-operators-zc7j4\" (UID: \"f4d7a454-7a02-478f-bbfe-a4421bd7d39f\") " pod="openshift-marketplace/redhat-operators-zc7j4" Nov 25 15:29:40 crc kubenswrapper[4796]: I1125 15:29:40.638236 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d7a454-7a02-478f-bbfe-a4421bd7d39f-catalog-content\") pod \"redhat-operators-zc7j4\" (UID: \"f4d7a454-7a02-478f-bbfe-a4421bd7d39f\") " pod="openshift-marketplace/redhat-operators-zc7j4" Nov 25 15:29:40 crc kubenswrapper[4796]: I1125 15:29:40.638783 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d7a454-7a02-478f-bbfe-a4421bd7d39f-utilities\") pod \"redhat-operators-zc7j4\" (UID: \"f4d7a454-7a02-478f-bbfe-a4421bd7d39f\") " pod="openshift-marketplace/redhat-operators-zc7j4" Nov 25 15:29:40 crc kubenswrapper[4796]: I1125 15:29:40.638849 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d7a454-7a02-478f-bbfe-a4421bd7d39f-catalog-content\") pod \"redhat-operators-zc7j4\" (UID: \"f4d7a454-7a02-478f-bbfe-a4421bd7d39f\") " pod="openshift-marketplace/redhat-operators-zc7j4" Nov 25 15:29:40 crc kubenswrapper[4796]: I1125 15:29:40.673727 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qkvn\" (UniqueName: \"kubernetes.io/projected/f4d7a454-7a02-478f-bbfe-a4421bd7d39f-kube-api-access-8qkvn\") pod \"redhat-operators-zc7j4\" (UID: \"f4d7a454-7a02-478f-bbfe-a4421bd7d39f\") " pod="openshift-marketplace/redhat-operators-zc7j4" Nov 25 15:29:40 crc kubenswrapper[4796]: I1125 15:29:40.730125 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zc7j4" Nov 25 15:29:41 crc kubenswrapper[4796]: I1125 15:29:41.241957 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zc7j4"] Nov 25 15:29:41 crc kubenswrapper[4796]: I1125 15:29:41.963168 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4d7a454-7a02-478f-bbfe-a4421bd7d39f" containerID="f08211c3ff960f365bdc520c463c64ddd92ddd643e399f9715b3a2944b1296a8" exitCode=0 Nov 25 15:29:41 crc kubenswrapper[4796]: I1125 15:29:41.963335 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zc7j4" event={"ID":"f4d7a454-7a02-478f-bbfe-a4421bd7d39f","Type":"ContainerDied","Data":"f08211c3ff960f365bdc520c463c64ddd92ddd643e399f9715b3a2944b1296a8"} Nov 25 15:29:41 crc kubenswrapper[4796]: I1125 15:29:41.963712 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zc7j4" event={"ID":"f4d7a454-7a02-478f-bbfe-a4421bd7d39f","Type":"ContainerStarted","Data":"0934bd9d09623dd6ff9abe9305cf07ca3e291cebec01d5fa35a34c1f9f5f8349"} Nov 25 15:29:43 crc kubenswrapper[4796]: I1125 15:29:43.985404 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zc7j4" event={"ID":"f4d7a454-7a02-478f-bbfe-a4421bd7d39f","Type":"ContainerStarted","Data":"53855aea7115dae5b2c44e1922463621a8512fd97691feb863d918e7ad6cb664"} Nov 25 15:29:46 crc kubenswrapper[4796]: I1125 15:29:46.409539 4796 scope.go:117] "RemoveContainer" containerID="397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" Nov 25 15:29:46 crc kubenswrapper[4796]: E1125 15:29:46.409979 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:29:47 crc kubenswrapper[4796]: I1125 15:29:47.028206 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4d7a454-7a02-478f-bbfe-a4421bd7d39f" containerID="53855aea7115dae5b2c44e1922463621a8512fd97691feb863d918e7ad6cb664" exitCode=0 Nov 25 15:29:47 crc kubenswrapper[4796]: I1125 15:29:47.028285 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zc7j4" event={"ID":"f4d7a454-7a02-478f-bbfe-a4421bd7d39f","Type":"ContainerDied","Data":"53855aea7115dae5b2c44e1922463621a8512fd97691feb863d918e7ad6cb664"} Nov 25 15:29:49 crc kubenswrapper[4796]: I1125 15:29:49.048192 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zc7j4" event={"ID":"f4d7a454-7a02-478f-bbfe-a4421bd7d39f","Type":"ContainerStarted","Data":"00653f2b7f5aaee856aca5a63e7309d08bceff14dbd0cbd593e2afe1bbcd5ea2"} Nov 25 15:29:49 crc kubenswrapper[4796]: I1125 15:29:49.087567 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zc7j4" podStartSLOduration=3.63376105 podStartE2EDuration="9.087548236s" podCreationTimestamp="2025-11-25 15:29:40 +0000 UTC" firstStartedPulling="2025-11-25 15:29:41.970868291 +0000 UTC m=+3910.313977715" lastFinishedPulling="2025-11-25 15:29:47.424655467 +0000 UTC m=+3915.767764901" observedRunningTime="2025-11-25 15:29:49.085517102 +0000 UTC m=+3917.428626526" watchObservedRunningTime="2025-11-25 15:29:49.087548236 +0000 UTC m=+3917.430657660" Nov 25 15:29:50 crc kubenswrapper[4796]: I1125 15:29:50.730626 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zc7j4" Nov 25 15:29:50 crc kubenswrapper[4796]: I1125 15:29:50.730955 
4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zc7j4" Nov 25 15:29:51 crc kubenswrapper[4796]: I1125 15:29:51.781381 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zc7j4" podUID="f4d7a454-7a02-478f-bbfe-a4421bd7d39f" containerName="registry-server" probeResult="failure" output=< Nov 25 15:29:51 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Nov 25 15:29:51 crc kubenswrapper[4796]: > Nov 25 15:29:58 crc kubenswrapper[4796]: I1125 15:29:58.411027 4796 scope.go:117] "RemoveContainer" containerID="397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" Nov 25 15:29:58 crc kubenswrapper[4796]: E1125 15:29:58.412386 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:30:00 crc kubenswrapper[4796]: I1125 15:30:00.189688 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401410-6dzwb"] Nov 25 15:30:00 crc kubenswrapper[4796]: I1125 15:30:00.191729 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-6dzwb" Nov 25 15:30:00 crc kubenswrapper[4796]: I1125 15:30:00.194759 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 15:30:00 crc kubenswrapper[4796]: I1125 15:30:00.194773 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 15:30:00 crc kubenswrapper[4796]: I1125 15:30:00.210966 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401410-6dzwb"] Nov 25 15:30:00 crc kubenswrapper[4796]: I1125 15:30:00.343407 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fef2ae84-3e27-404e-ba31-d178c77eb69e-secret-volume\") pod \"collect-profiles-29401410-6dzwb\" (UID: \"fef2ae84-3e27-404e-ba31-d178c77eb69e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-6dzwb" Nov 25 15:30:00 crc kubenswrapper[4796]: I1125 15:30:00.343477 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24p78\" (UniqueName: \"kubernetes.io/projected/fef2ae84-3e27-404e-ba31-d178c77eb69e-kube-api-access-24p78\") pod \"collect-profiles-29401410-6dzwb\" (UID: \"fef2ae84-3e27-404e-ba31-d178c77eb69e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-6dzwb" Nov 25 15:30:00 crc kubenswrapper[4796]: I1125 15:30:00.343694 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fef2ae84-3e27-404e-ba31-d178c77eb69e-config-volume\") pod \"collect-profiles-29401410-6dzwb\" (UID: \"fef2ae84-3e27-404e-ba31-d178c77eb69e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-6dzwb" Nov 25 15:30:00 crc kubenswrapper[4796]: I1125 15:30:00.445767 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fef2ae84-3e27-404e-ba31-d178c77eb69e-secret-volume\") pod \"collect-profiles-29401410-6dzwb\" (UID: \"fef2ae84-3e27-404e-ba31-d178c77eb69e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-6dzwb" Nov 25 15:30:00 crc kubenswrapper[4796]: I1125 15:30:00.445814 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24p78\" (UniqueName: \"kubernetes.io/projected/fef2ae84-3e27-404e-ba31-d178c77eb69e-kube-api-access-24p78\") pod \"collect-profiles-29401410-6dzwb\" (UID: \"fef2ae84-3e27-404e-ba31-d178c77eb69e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-6dzwb" Nov 25 15:30:00 crc kubenswrapper[4796]: I1125 15:30:00.445896 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fef2ae84-3e27-404e-ba31-d178c77eb69e-config-volume\") pod \"collect-profiles-29401410-6dzwb\" (UID: \"fef2ae84-3e27-404e-ba31-d178c77eb69e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-6dzwb" Nov 25 15:30:00 crc kubenswrapper[4796]: I1125 15:30:00.446840 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fef2ae84-3e27-404e-ba31-d178c77eb69e-config-volume\") pod \"collect-profiles-29401410-6dzwb\" (UID: \"fef2ae84-3e27-404e-ba31-d178c77eb69e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-6dzwb" Nov 25 15:30:00 crc kubenswrapper[4796]: I1125 15:30:00.459380 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/fef2ae84-3e27-404e-ba31-d178c77eb69e-secret-volume\") pod \"collect-profiles-29401410-6dzwb\" (UID: \"fef2ae84-3e27-404e-ba31-d178c77eb69e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-6dzwb" Nov 25 15:30:00 crc kubenswrapper[4796]: I1125 15:30:00.465623 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24p78\" (UniqueName: \"kubernetes.io/projected/fef2ae84-3e27-404e-ba31-d178c77eb69e-kube-api-access-24p78\") pod \"collect-profiles-29401410-6dzwb\" (UID: \"fef2ae84-3e27-404e-ba31-d178c77eb69e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-6dzwb" Nov 25 15:30:00 crc kubenswrapper[4796]: I1125 15:30:00.527485 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-6dzwb" Nov 25 15:30:00 crc kubenswrapper[4796]: I1125 15:30:00.791175 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zc7j4" Nov 25 15:30:00 crc kubenswrapper[4796]: I1125 15:30:00.864741 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zc7j4" Nov 25 15:30:01 crc kubenswrapper[4796]: I1125 15:30:01.025638 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401410-6dzwb"] Nov 25 15:30:01 crc kubenswrapper[4796]: I1125 15:30:01.039524 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zc7j4"] Nov 25 15:30:01 crc kubenswrapper[4796]: I1125 15:30:01.155593 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-6dzwb" event={"ID":"fef2ae84-3e27-404e-ba31-d178c77eb69e","Type":"ContainerStarted","Data":"fae9c20d5dd65b9f3f5154bed118e8714905670e55dba071f2875fec57701aa5"} Nov 25 15:30:02 crc 
kubenswrapper[4796]: I1125 15:30:02.177813 4796 generic.go:334] "Generic (PLEG): container finished" podID="fef2ae84-3e27-404e-ba31-d178c77eb69e" containerID="4a64ea0a7513f5a0bdca43016fc43c6ac5fba2681b41f7242a49532c90b4321b" exitCode=0 Nov 25 15:30:02 crc kubenswrapper[4796]: I1125 15:30:02.178181 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zc7j4" podUID="f4d7a454-7a02-478f-bbfe-a4421bd7d39f" containerName="registry-server" containerID="cri-o://00653f2b7f5aaee856aca5a63e7309d08bceff14dbd0cbd593e2afe1bbcd5ea2" gracePeriod=2 Nov 25 15:30:02 crc kubenswrapper[4796]: I1125 15:30:02.178959 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-6dzwb" event={"ID":"fef2ae84-3e27-404e-ba31-d178c77eb69e","Type":"ContainerDied","Data":"4a64ea0a7513f5a0bdca43016fc43c6ac5fba2681b41f7242a49532c90b4321b"} Nov 25 15:30:02 crc kubenswrapper[4796]: I1125 15:30:02.855890 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zc7j4" Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.004167 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qkvn\" (UniqueName: \"kubernetes.io/projected/f4d7a454-7a02-478f-bbfe-a4421bd7d39f-kube-api-access-8qkvn\") pod \"f4d7a454-7a02-478f-bbfe-a4421bd7d39f\" (UID: \"f4d7a454-7a02-478f-bbfe-a4421bd7d39f\") " Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.004931 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d7a454-7a02-478f-bbfe-a4421bd7d39f-utilities\") pod \"f4d7a454-7a02-478f-bbfe-a4421bd7d39f\" (UID: \"f4d7a454-7a02-478f-bbfe-a4421bd7d39f\") " Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.005126 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d7a454-7a02-478f-bbfe-a4421bd7d39f-catalog-content\") pod \"f4d7a454-7a02-478f-bbfe-a4421bd7d39f\" (UID: \"f4d7a454-7a02-478f-bbfe-a4421bd7d39f\") " Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.005725 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4d7a454-7a02-478f-bbfe-a4421bd7d39f-utilities" (OuterVolumeSpecName: "utilities") pod "f4d7a454-7a02-478f-bbfe-a4421bd7d39f" (UID: "f4d7a454-7a02-478f-bbfe-a4421bd7d39f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.006036 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4d7a454-7a02-478f-bbfe-a4421bd7d39f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.021888 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4d7a454-7a02-478f-bbfe-a4421bd7d39f-kube-api-access-8qkvn" (OuterVolumeSpecName: "kube-api-access-8qkvn") pod "f4d7a454-7a02-478f-bbfe-a4421bd7d39f" (UID: "f4d7a454-7a02-478f-bbfe-a4421bd7d39f"). InnerVolumeSpecName "kube-api-access-8qkvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.107525 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4d7a454-7a02-478f-bbfe-a4421bd7d39f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4d7a454-7a02-478f-bbfe-a4421bd7d39f" (UID: "f4d7a454-7a02-478f-bbfe-a4421bd7d39f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.108298 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4d7a454-7a02-478f-bbfe-a4421bd7d39f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.108424 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qkvn\" (UniqueName: \"kubernetes.io/projected/f4d7a454-7a02-478f-bbfe-a4421bd7d39f-kube-api-access-8qkvn\") on node \"crc\" DevicePath \"\"" Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.189035 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4d7a454-7a02-478f-bbfe-a4421bd7d39f" containerID="00653f2b7f5aaee856aca5a63e7309d08bceff14dbd0cbd593e2afe1bbcd5ea2" exitCode=0 Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.189091 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zc7j4" event={"ID":"f4d7a454-7a02-478f-bbfe-a4421bd7d39f","Type":"ContainerDied","Data":"00653f2b7f5aaee856aca5a63e7309d08bceff14dbd0cbd593e2afe1bbcd5ea2"} Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.190198 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zc7j4" event={"ID":"f4d7a454-7a02-478f-bbfe-a4421bd7d39f","Type":"ContainerDied","Data":"0934bd9d09623dd6ff9abe9305cf07ca3e291cebec01d5fa35a34c1f9f5f8349"} Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.189122 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zc7j4" Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.190240 4796 scope.go:117] "RemoveContainer" containerID="00653f2b7f5aaee856aca5a63e7309d08bceff14dbd0cbd593e2afe1bbcd5ea2" Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.229184 4796 scope.go:117] "RemoveContainer" containerID="53855aea7115dae5b2c44e1922463621a8512fd97691feb863d918e7ad6cb664" Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.245515 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zc7j4"] Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.254469 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zc7j4"] Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.261795 4796 scope.go:117] "RemoveContainer" containerID="f08211c3ff960f365bdc520c463c64ddd92ddd643e399f9715b3a2944b1296a8" Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.335451 4796 scope.go:117] "RemoveContainer" containerID="00653f2b7f5aaee856aca5a63e7309d08bceff14dbd0cbd593e2afe1bbcd5ea2" Nov 25 15:30:03 crc kubenswrapper[4796]: E1125 15:30:03.335984 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00653f2b7f5aaee856aca5a63e7309d08bceff14dbd0cbd593e2afe1bbcd5ea2\": container with ID starting with 00653f2b7f5aaee856aca5a63e7309d08bceff14dbd0cbd593e2afe1bbcd5ea2 not found: ID does not exist" containerID="00653f2b7f5aaee856aca5a63e7309d08bceff14dbd0cbd593e2afe1bbcd5ea2" Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.336049 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00653f2b7f5aaee856aca5a63e7309d08bceff14dbd0cbd593e2afe1bbcd5ea2"} err="failed to get container status \"00653f2b7f5aaee856aca5a63e7309d08bceff14dbd0cbd593e2afe1bbcd5ea2\": rpc error: code = NotFound desc = could not find container 
\"00653f2b7f5aaee856aca5a63e7309d08bceff14dbd0cbd593e2afe1bbcd5ea2\": container with ID starting with 00653f2b7f5aaee856aca5a63e7309d08bceff14dbd0cbd593e2afe1bbcd5ea2 not found: ID does not exist" Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.336085 4796 scope.go:117] "RemoveContainer" containerID="53855aea7115dae5b2c44e1922463621a8512fd97691feb863d918e7ad6cb664" Nov 25 15:30:03 crc kubenswrapper[4796]: E1125 15:30:03.336473 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53855aea7115dae5b2c44e1922463621a8512fd97691feb863d918e7ad6cb664\": container with ID starting with 53855aea7115dae5b2c44e1922463621a8512fd97691feb863d918e7ad6cb664 not found: ID does not exist" containerID="53855aea7115dae5b2c44e1922463621a8512fd97691feb863d918e7ad6cb664" Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.336538 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53855aea7115dae5b2c44e1922463621a8512fd97691feb863d918e7ad6cb664"} err="failed to get container status \"53855aea7115dae5b2c44e1922463621a8512fd97691feb863d918e7ad6cb664\": rpc error: code = NotFound desc = could not find container \"53855aea7115dae5b2c44e1922463621a8512fd97691feb863d918e7ad6cb664\": container with ID starting with 53855aea7115dae5b2c44e1922463621a8512fd97691feb863d918e7ad6cb664 not found: ID does not exist" Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.336602 4796 scope.go:117] "RemoveContainer" containerID="f08211c3ff960f365bdc520c463c64ddd92ddd643e399f9715b3a2944b1296a8" Nov 25 15:30:03 crc kubenswrapper[4796]: E1125 15:30:03.336967 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f08211c3ff960f365bdc520c463c64ddd92ddd643e399f9715b3a2944b1296a8\": container with ID starting with f08211c3ff960f365bdc520c463c64ddd92ddd643e399f9715b3a2944b1296a8 not found: ID does not exist" 
containerID="f08211c3ff960f365bdc520c463c64ddd92ddd643e399f9715b3a2944b1296a8" Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.337014 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f08211c3ff960f365bdc520c463c64ddd92ddd643e399f9715b3a2944b1296a8"} err="failed to get container status \"f08211c3ff960f365bdc520c463c64ddd92ddd643e399f9715b3a2944b1296a8\": rpc error: code = NotFound desc = could not find container \"f08211c3ff960f365bdc520c463c64ddd92ddd643e399f9715b3a2944b1296a8\": container with ID starting with f08211c3ff960f365bdc520c463c64ddd92ddd643e399f9715b3a2944b1296a8 not found: ID does not exist" Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.601531 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-6dzwb" Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.730803 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fef2ae84-3e27-404e-ba31-d178c77eb69e-config-volume\") pod \"fef2ae84-3e27-404e-ba31-d178c77eb69e\" (UID: \"fef2ae84-3e27-404e-ba31-d178c77eb69e\") " Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.730906 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24p78\" (UniqueName: \"kubernetes.io/projected/fef2ae84-3e27-404e-ba31-d178c77eb69e-kube-api-access-24p78\") pod \"fef2ae84-3e27-404e-ba31-d178c77eb69e\" (UID: \"fef2ae84-3e27-404e-ba31-d178c77eb69e\") " Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.731645 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fef2ae84-3e27-404e-ba31-d178c77eb69e-config-volume" (OuterVolumeSpecName: "config-volume") pod "fef2ae84-3e27-404e-ba31-d178c77eb69e" (UID: "fef2ae84-3e27-404e-ba31-d178c77eb69e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.731904 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fef2ae84-3e27-404e-ba31-d178c77eb69e-secret-volume\") pod \"fef2ae84-3e27-404e-ba31-d178c77eb69e\" (UID: \"fef2ae84-3e27-404e-ba31-d178c77eb69e\") " Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.732485 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fef2ae84-3e27-404e-ba31-d178c77eb69e-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.736834 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fef2ae84-3e27-404e-ba31-d178c77eb69e-kube-api-access-24p78" (OuterVolumeSpecName: "kube-api-access-24p78") pod "fef2ae84-3e27-404e-ba31-d178c77eb69e" (UID: "fef2ae84-3e27-404e-ba31-d178c77eb69e"). InnerVolumeSpecName "kube-api-access-24p78". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.737455 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fef2ae84-3e27-404e-ba31-d178c77eb69e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fef2ae84-3e27-404e-ba31-d178c77eb69e" (UID: "fef2ae84-3e27-404e-ba31-d178c77eb69e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.834673 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24p78\" (UniqueName: \"kubernetes.io/projected/fef2ae84-3e27-404e-ba31-d178c77eb69e-kube-api-access-24p78\") on node \"crc\" DevicePath \"\"" Nov 25 15:30:03 crc kubenswrapper[4796]: I1125 15:30:03.834709 4796 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fef2ae84-3e27-404e-ba31-d178c77eb69e-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 15:30:04 crc kubenswrapper[4796]: I1125 15:30:04.207454 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-6dzwb" event={"ID":"fef2ae84-3e27-404e-ba31-d178c77eb69e","Type":"ContainerDied","Data":"fae9c20d5dd65b9f3f5154bed118e8714905670e55dba071f2875fec57701aa5"} Nov 25 15:30:04 crc kubenswrapper[4796]: I1125 15:30:04.207486 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-6dzwb" Nov 25 15:30:04 crc kubenswrapper[4796]: I1125 15:30:04.207525 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fae9c20d5dd65b9f3f5154bed118e8714905670e55dba071f2875fec57701aa5" Nov 25 15:30:04 crc kubenswrapper[4796]: I1125 15:30:04.420797 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4d7a454-7a02-478f-bbfe-a4421bd7d39f" path="/var/lib/kubelet/pods/f4d7a454-7a02-478f-bbfe-a4421bd7d39f/volumes" Nov 25 15:30:04 crc kubenswrapper[4796]: I1125 15:30:04.692454 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401365-tfbfx"] Nov 25 15:30:04 crc kubenswrapper[4796]: I1125 15:30:04.700128 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401365-tfbfx"] Nov 25 15:30:06 crc kubenswrapper[4796]: I1125 15:30:06.427124 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34fd65b2-a144-49bd-8e5d-5cc42a812348" path="/var/lib/kubelet/pods/34fd65b2-a144-49bd-8e5d-5cc42a812348/volumes" Nov 25 15:30:11 crc kubenswrapper[4796]: I1125 15:30:11.409326 4796 scope.go:117] "RemoveContainer" containerID="397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" Nov 25 15:30:11 crc kubenswrapper[4796]: E1125 15:30:11.410163 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:30:22 crc kubenswrapper[4796]: I1125 15:30:22.428082 4796 scope.go:117] "RemoveContainer" 
containerID="397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" Nov 25 15:30:23 crc kubenswrapper[4796]: I1125 15:30:23.437677 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerStarted","Data":"ab149f04ee33eb6fa179e2fae0783da6b2b9681d3eecf03b9d858d176b0d61b6"} Nov 25 15:30:49 crc kubenswrapper[4796]: I1125 15:30:49.306144 4796 scope.go:117] "RemoveContainer" containerID="13783ee45b22b874c27eabc4868a95fdde849ab4769e5e8d964da083f42995b0" Nov 25 15:30:58 crc kubenswrapper[4796]: I1125 15:30:58.839976 4796 generic.go:334] "Generic (PLEG): container finished" podID="22e13480-3aaf-4df3-8c30-1cd8f2b33e55" containerID="be169575477e0714cd5760e4e5f53bb83798d81486e91313d28079630f393f9d" exitCode=0 Nov 25 15:30:58 crc kubenswrapper[4796]: I1125 15:30:58.840162 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mw55g/must-gather-nlpgl" event={"ID":"22e13480-3aaf-4df3-8c30-1cd8f2b33e55","Type":"ContainerDied","Data":"be169575477e0714cd5760e4e5f53bb83798d81486e91313d28079630f393f9d"} Nov 25 15:30:58 crc kubenswrapper[4796]: I1125 15:30:58.841855 4796 scope.go:117] "RemoveContainer" containerID="be169575477e0714cd5760e4e5f53bb83798d81486e91313d28079630f393f9d" Nov 25 15:30:59 crc kubenswrapper[4796]: I1125 15:30:59.207020 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mw55g_must-gather-nlpgl_22e13480-3aaf-4df3-8c30-1cd8f2b33e55/gather/0.log" Nov 25 15:31:07 crc kubenswrapper[4796]: I1125 15:31:07.111302 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mw55g/must-gather-nlpgl"] Nov 25 15:31:07 crc kubenswrapper[4796]: I1125 15:31:07.112549 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-mw55g/must-gather-nlpgl" podUID="22e13480-3aaf-4df3-8c30-1cd8f2b33e55" 
containerName="copy" containerID="cri-o://6983b789376ca1dd809ff3ebc1473de66511d134934bfea717b7a190f538ae1e" gracePeriod=2 Nov 25 15:31:07 crc kubenswrapper[4796]: I1125 15:31:07.119074 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mw55g/must-gather-nlpgl"] Nov 25 15:31:07 crc kubenswrapper[4796]: I1125 15:31:07.760311 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mw55g_must-gather-nlpgl_22e13480-3aaf-4df3-8c30-1cd8f2b33e55/copy/0.log" Nov 25 15:31:07 crc kubenswrapper[4796]: I1125 15:31:07.761313 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mw55g/must-gather-nlpgl" Nov 25 15:31:07 crc kubenswrapper[4796]: I1125 15:31:07.892636 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctnf8\" (UniqueName: \"kubernetes.io/projected/22e13480-3aaf-4df3-8c30-1cd8f2b33e55-kube-api-access-ctnf8\") pod \"22e13480-3aaf-4df3-8c30-1cd8f2b33e55\" (UID: \"22e13480-3aaf-4df3-8c30-1cd8f2b33e55\") " Nov 25 15:31:07 crc kubenswrapper[4796]: I1125 15:31:07.892834 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/22e13480-3aaf-4df3-8c30-1cd8f2b33e55-must-gather-output\") pod \"22e13480-3aaf-4df3-8c30-1cd8f2b33e55\" (UID: \"22e13480-3aaf-4df3-8c30-1cd8f2b33e55\") " Nov 25 15:31:07 crc kubenswrapper[4796]: I1125 15:31:07.898660 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22e13480-3aaf-4df3-8c30-1cd8f2b33e55-kube-api-access-ctnf8" (OuterVolumeSpecName: "kube-api-access-ctnf8") pod "22e13480-3aaf-4df3-8c30-1cd8f2b33e55" (UID: "22e13480-3aaf-4df3-8c30-1cd8f2b33e55"). InnerVolumeSpecName "kube-api-access-ctnf8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:31:07 crc kubenswrapper[4796]: I1125 15:31:07.921029 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mw55g_must-gather-nlpgl_22e13480-3aaf-4df3-8c30-1cd8f2b33e55/copy/0.log" Nov 25 15:31:07 crc kubenswrapper[4796]: I1125 15:31:07.921429 4796 generic.go:334] "Generic (PLEG): container finished" podID="22e13480-3aaf-4df3-8c30-1cd8f2b33e55" containerID="6983b789376ca1dd809ff3ebc1473de66511d134934bfea717b7a190f538ae1e" exitCode=143 Nov 25 15:31:07 crc kubenswrapper[4796]: I1125 15:31:07.921488 4796 scope.go:117] "RemoveContainer" containerID="6983b789376ca1dd809ff3ebc1473de66511d134934bfea717b7a190f538ae1e" Nov 25 15:31:07 crc kubenswrapper[4796]: I1125 15:31:07.921638 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mw55g/must-gather-nlpgl" Nov 25 15:31:07 crc kubenswrapper[4796]: I1125 15:31:07.960245 4796 scope.go:117] "RemoveContainer" containerID="be169575477e0714cd5760e4e5f53bb83798d81486e91313d28079630f393f9d" Nov 25 15:31:07 crc kubenswrapper[4796]: I1125 15:31:07.996463 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctnf8\" (UniqueName: \"kubernetes.io/projected/22e13480-3aaf-4df3-8c30-1cd8f2b33e55-kube-api-access-ctnf8\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:08 crc kubenswrapper[4796]: I1125 15:31:08.033827 4796 scope.go:117] "RemoveContainer" containerID="6983b789376ca1dd809ff3ebc1473de66511d134934bfea717b7a190f538ae1e" Nov 25 15:31:08 crc kubenswrapper[4796]: E1125 15:31:08.034368 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6983b789376ca1dd809ff3ebc1473de66511d134934bfea717b7a190f538ae1e\": container with ID starting with 6983b789376ca1dd809ff3ebc1473de66511d134934bfea717b7a190f538ae1e not found: ID does not exist" 
containerID="6983b789376ca1dd809ff3ebc1473de66511d134934bfea717b7a190f538ae1e" Nov 25 15:31:08 crc kubenswrapper[4796]: I1125 15:31:08.034441 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6983b789376ca1dd809ff3ebc1473de66511d134934bfea717b7a190f538ae1e"} err="failed to get container status \"6983b789376ca1dd809ff3ebc1473de66511d134934bfea717b7a190f538ae1e\": rpc error: code = NotFound desc = could not find container \"6983b789376ca1dd809ff3ebc1473de66511d134934bfea717b7a190f538ae1e\": container with ID starting with 6983b789376ca1dd809ff3ebc1473de66511d134934bfea717b7a190f538ae1e not found: ID does not exist" Nov 25 15:31:08 crc kubenswrapper[4796]: I1125 15:31:08.034508 4796 scope.go:117] "RemoveContainer" containerID="be169575477e0714cd5760e4e5f53bb83798d81486e91313d28079630f393f9d" Nov 25 15:31:08 crc kubenswrapper[4796]: E1125 15:31:08.034827 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be169575477e0714cd5760e4e5f53bb83798d81486e91313d28079630f393f9d\": container with ID starting with be169575477e0714cd5760e4e5f53bb83798d81486e91313d28079630f393f9d not found: ID does not exist" containerID="be169575477e0714cd5760e4e5f53bb83798d81486e91313d28079630f393f9d" Nov 25 15:31:08 crc kubenswrapper[4796]: I1125 15:31:08.034886 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be169575477e0714cd5760e4e5f53bb83798d81486e91313d28079630f393f9d"} err="failed to get container status \"be169575477e0714cd5760e4e5f53bb83798d81486e91313d28079630f393f9d\": rpc error: code = NotFound desc = could not find container \"be169575477e0714cd5760e4e5f53bb83798d81486e91313d28079630f393f9d\": container with ID starting with be169575477e0714cd5760e4e5f53bb83798d81486e91313d28079630f393f9d not found: ID does not exist" Nov 25 15:31:08 crc kubenswrapper[4796]: I1125 15:31:08.050681 4796 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22e13480-3aaf-4df3-8c30-1cd8f2b33e55-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "22e13480-3aaf-4df3-8c30-1cd8f2b33e55" (UID: "22e13480-3aaf-4df3-8c30-1cd8f2b33e55"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:31:08 crc kubenswrapper[4796]: I1125 15:31:08.098888 4796 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/22e13480-3aaf-4df3-8c30-1cd8f2b33e55-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:08 crc kubenswrapper[4796]: I1125 15:31:08.419494 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22e13480-3aaf-4df3-8c30-1cd8f2b33e55" path="/var/lib/kubelet/pods/22e13480-3aaf-4df3-8c30-1cd8f2b33e55/volumes" Nov 25 15:31:31 crc kubenswrapper[4796]: I1125 15:31:31.839781 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-crfpn"] Nov 25 15:31:31 crc kubenswrapper[4796]: E1125 15:31:31.840844 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22e13480-3aaf-4df3-8c30-1cd8f2b33e55" containerName="copy" Nov 25 15:31:31 crc kubenswrapper[4796]: I1125 15:31:31.840860 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="22e13480-3aaf-4df3-8c30-1cd8f2b33e55" containerName="copy" Nov 25 15:31:31 crc kubenswrapper[4796]: E1125 15:31:31.840874 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22e13480-3aaf-4df3-8c30-1cd8f2b33e55" containerName="gather" Nov 25 15:31:31 crc kubenswrapper[4796]: I1125 15:31:31.840880 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="22e13480-3aaf-4df3-8c30-1cd8f2b33e55" containerName="gather" Nov 25 15:31:31 crc kubenswrapper[4796]: E1125 15:31:31.840894 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fef2ae84-3e27-404e-ba31-d178c77eb69e" containerName="collect-profiles" Nov 25 15:31:31 crc kubenswrapper[4796]: I1125 15:31:31.840901 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fef2ae84-3e27-404e-ba31-d178c77eb69e" containerName="collect-profiles" Nov 25 15:31:31 crc kubenswrapper[4796]: E1125 15:31:31.840911 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d7a454-7a02-478f-bbfe-a4421bd7d39f" containerName="extract-utilities" Nov 25 15:31:31 crc kubenswrapper[4796]: I1125 15:31:31.840919 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d7a454-7a02-478f-bbfe-a4421bd7d39f" containerName="extract-utilities" Nov 25 15:31:31 crc kubenswrapper[4796]: E1125 15:31:31.840935 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d7a454-7a02-478f-bbfe-a4421bd7d39f" containerName="extract-content" Nov 25 15:31:31 crc kubenswrapper[4796]: I1125 15:31:31.840942 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d7a454-7a02-478f-bbfe-a4421bd7d39f" containerName="extract-content" Nov 25 15:31:31 crc kubenswrapper[4796]: E1125 15:31:31.840958 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d7a454-7a02-478f-bbfe-a4421bd7d39f" containerName="registry-server" Nov 25 15:31:31 crc kubenswrapper[4796]: I1125 15:31:31.840964 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d7a454-7a02-478f-bbfe-a4421bd7d39f" containerName="registry-server" Nov 25 15:31:31 crc kubenswrapper[4796]: I1125 15:31:31.841148 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="fef2ae84-3e27-404e-ba31-d178c77eb69e" containerName="collect-profiles" Nov 25 15:31:31 crc kubenswrapper[4796]: I1125 15:31:31.841163 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="22e13480-3aaf-4df3-8c30-1cd8f2b33e55" containerName="copy" Nov 25 15:31:31 crc kubenswrapper[4796]: I1125 15:31:31.841180 4796 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="22e13480-3aaf-4df3-8c30-1cd8f2b33e55" containerName="gather" Nov 25 15:31:31 crc kubenswrapper[4796]: I1125 15:31:31.841192 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4d7a454-7a02-478f-bbfe-a4421bd7d39f" containerName="registry-server" Nov 25 15:31:31 crc kubenswrapper[4796]: I1125 15:31:31.842564 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-crfpn" Nov 25 15:31:31 crc kubenswrapper[4796]: I1125 15:31:31.851783 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-crfpn"] Nov 25 15:31:31 crc kubenswrapper[4796]: I1125 15:31:31.935842 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1c31266-9678-4c15-b45d-ce36dfbd07d8-catalog-content\") pod \"certified-operators-crfpn\" (UID: \"a1c31266-9678-4c15-b45d-ce36dfbd07d8\") " pod="openshift-marketplace/certified-operators-crfpn" Nov 25 15:31:31 crc kubenswrapper[4796]: I1125 15:31:31.935922 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htvx7\" (UniqueName: \"kubernetes.io/projected/a1c31266-9678-4c15-b45d-ce36dfbd07d8-kube-api-access-htvx7\") pod \"certified-operators-crfpn\" (UID: \"a1c31266-9678-4c15-b45d-ce36dfbd07d8\") " pod="openshift-marketplace/certified-operators-crfpn" Nov 25 15:31:31 crc kubenswrapper[4796]: I1125 15:31:31.936013 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1c31266-9678-4c15-b45d-ce36dfbd07d8-utilities\") pod \"certified-operators-crfpn\" (UID: \"a1c31266-9678-4c15-b45d-ce36dfbd07d8\") " pod="openshift-marketplace/certified-operators-crfpn" Nov 25 15:31:32 crc kubenswrapper[4796]: I1125 15:31:32.037420 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1c31266-9678-4c15-b45d-ce36dfbd07d8-utilities\") pod \"certified-operators-crfpn\" (UID: \"a1c31266-9678-4c15-b45d-ce36dfbd07d8\") " pod="openshift-marketplace/certified-operators-crfpn" Nov 25 15:31:32 crc kubenswrapper[4796]: I1125 15:31:32.037902 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1c31266-9678-4c15-b45d-ce36dfbd07d8-catalog-content\") pod \"certified-operators-crfpn\" (UID: \"a1c31266-9678-4c15-b45d-ce36dfbd07d8\") " pod="openshift-marketplace/certified-operators-crfpn" Nov 25 15:31:32 crc kubenswrapper[4796]: I1125 15:31:32.037940 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htvx7\" (UniqueName: \"kubernetes.io/projected/a1c31266-9678-4c15-b45d-ce36dfbd07d8-kube-api-access-htvx7\") pod \"certified-operators-crfpn\" (UID: \"a1c31266-9678-4c15-b45d-ce36dfbd07d8\") " pod="openshift-marketplace/certified-operators-crfpn" Nov 25 15:31:32 crc kubenswrapper[4796]: I1125 15:31:32.038116 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1c31266-9678-4c15-b45d-ce36dfbd07d8-utilities\") pod \"certified-operators-crfpn\" (UID: \"a1c31266-9678-4c15-b45d-ce36dfbd07d8\") " pod="openshift-marketplace/certified-operators-crfpn" Nov 25 15:31:32 crc kubenswrapper[4796]: I1125 15:31:32.038505 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1c31266-9678-4c15-b45d-ce36dfbd07d8-catalog-content\") pod \"certified-operators-crfpn\" (UID: \"a1c31266-9678-4c15-b45d-ce36dfbd07d8\") " pod="openshift-marketplace/certified-operators-crfpn" Nov 25 15:31:32 crc kubenswrapper[4796]: I1125 15:31:32.058841 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-htvx7\" (UniqueName: \"kubernetes.io/projected/a1c31266-9678-4c15-b45d-ce36dfbd07d8-kube-api-access-htvx7\") pod \"certified-operators-crfpn\" (UID: \"a1c31266-9678-4c15-b45d-ce36dfbd07d8\") " pod="openshift-marketplace/certified-operators-crfpn" Nov 25 15:31:32 crc kubenswrapper[4796]: I1125 15:31:32.176072 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-crfpn" Nov 25 15:31:32 crc kubenswrapper[4796]: I1125 15:31:32.683611 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-crfpn"] Nov 25 15:31:33 crc kubenswrapper[4796]: I1125 15:31:33.201655 4796 generic.go:334] "Generic (PLEG): container finished" podID="a1c31266-9678-4c15-b45d-ce36dfbd07d8" containerID="7f75d18f812da3357b28d12d9f2f9b211b2a361a28da627c5ceb39f9469bce31" exitCode=0 Nov 25 15:31:33 crc kubenswrapper[4796]: I1125 15:31:33.201730 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crfpn" event={"ID":"a1c31266-9678-4c15-b45d-ce36dfbd07d8","Type":"ContainerDied","Data":"7f75d18f812da3357b28d12d9f2f9b211b2a361a28da627c5ceb39f9469bce31"} Nov 25 15:31:33 crc kubenswrapper[4796]: I1125 15:31:33.201775 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crfpn" event={"ID":"a1c31266-9678-4c15-b45d-ce36dfbd07d8","Type":"ContainerStarted","Data":"9cc16313f3ded7bc695d0f113bb6a52fd61fe77d939158f7fed921c483ad3c9d"} Nov 25 15:31:33 crc kubenswrapper[4796]: I1125 15:31:33.203804 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 15:31:35 crc kubenswrapper[4796]: I1125 15:31:35.224229 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crfpn" 
event={"ID":"a1c31266-9678-4c15-b45d-ce36dfbd07d8","Type":"ContainerStarted","Data":"0db401a3ba5f24b005892e266470b8ddf4cd9a1b4ac2d797001e2dc200ca4b30"} Nov 25 15:31:36 crc kubenswrapper[4796]: I1125 15:31:36.251966 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4xg9q"] Nov 25 15:31:36 crc kubenswrapper[4796]: I1125 15:31:36.255260 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xg9q" Nov 25 15:31:36 crc kubenswrapper[4796]: I1125 15:31:36.272607 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xg9q"] Nov 25 15:31:36 crc kubenswrapper[4796]: I1125 15:31:36.332593 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd256e25-2097-4283-a785-df3a7d6fb955-utilities\") pod \"redhat-marketplace-4xg9q\" (UID: \"dd256e25-2097-4283-a785-df3a7d6fb955\") " pod="openshift-marketplace/redhat-marketplace-4xg9q" Nov 25 15:31:36 crc kubenswrapper[4796]: I1125 15:31:36.332690 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgh8p\" (UniqueName: \"kubernetes.io/projected/dd256e25-2097-4283-a785-df3a7d6fb955-kube-api-access-sgh8p\") pod \"redhat-marketplace-4xg9q\" (UID: \"dd256e25-2097-4283-a785-df3a7d6fb955\") " pod="openshift-marketplace/redhat-marketplace-4xg9q" Nov 25 15:31:36 crc kubenswrapper[4796]: I1125 15:31:36.332791 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd256e25-2097-4283-a785-df3a7d6fb955-catalog-content\") pod \"redhat-marketplace-4xg9q\" (UID: \"dd256e25-2097-4283-a785-df3a7d6fb955\") " pod="openshift-marketplace/redhat-marketplace-4xg9q" Nov 25 15:31:36 crc kubenswrapper[4796]: I1125 15:31:36.435022 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd256e25-2097-4283-a785-df3a7d6fb955-catalog-content\") pod \"redhat-marketplace-4xg9q\" (UID: \"dd256e25-2097-4283-a785-df3a7d6fb955\") " pod="openshift-marketplace/redhat-marketplace-4xg9q" Nov 25 15:31:36 crc kubenswrapper[4796]: I1125 15:31:36.435148 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd256e25-2097-4283-a785-df3a7d6fb955-utilities\") pod \"redhat-marketplace-4xg9q\" (UID: \"dd256e25-2097-4283-a785-df3a7d6fb955\") " pod="openshift-marketplace/redhat-marketplace-4xg9q" Nov 25 15:31:36 crc kubenswrapper[4796]: I1125 15:31:36.435222 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgh8p\" (UniqueName: \"kubernetes.io/projected/dd256e25-2097-4283-a785-df3a7d6fb955-kube-api-access-sgh8p\") pod \"redhat-marketplace-4xg9q\" (UID: \"dd256e25-2097-4283-a785-df3a7d6fb955\") " pod="openshift-marketplace/redhat-marketplace-4xg9q" Nov 25 15:31:36 crc kubenswrapper[4796]: I1125 15:31:36.435827 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd256e25-2097-4283-a785-df3a7d6fb955-catalog-content\") pod \"redhat-marketplace-4xg9q\" (UID: \"dd256e25-2097-4283-a785-df3a7d6fb955\") " pod="openshift-marketplace/redhat-marketplace-4xg9q" Nov 25 15:31:36 crc kubenswrapper[4796]: I1125 15:31:36.435909 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd256e25-2097-4283-a785-df3a7d6fb955-utilities\") pod \"redhat-marketplace-4xg9q\" (UID: \"dd256e25-2097-4283-a785-df3a7d6fb955\") " pod="openshift-marketplace/redhat-marketplace-4xg9q" Nov 25 15:31:36 crc kubenswrapper[4796]: I1125 15:31:36.460328 4796 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-sgh8p\" (UniqueName: \"kubernetes.io/projected/dd256e25-2097-4283-a785-df3a7d6fb955-kube-api-access-sgh8p\") pod \"redhat-marketplace-4xg9q\" (UID: \"dd256e25-2097-4283-a785-df3a7d6fb955\") " pod="openshift-marketplace/redhat-marketplace-4xg9q" Nov 25 15:31:36 crc kubenswrapper[4796]: I1125 15:31:36.593702 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xg9q" Nov 25 15:31:37 crc kubenswrapper[4796]: I1125 15:31:37.088851 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xg9q"] Nov 25 15:31:37 crc kubenswrapper[4796]: W1125 15:31:37.091922 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd256e25_2097_4283_a785_df3a7d6fb955.slice/crio-e679fd8eddbb4a03d4587b9e39522dcb1721a555a619f92ee3ecac289a1d2ac8 WatchSource:0}: Error finding container e679fd8eddbb4a03d4587b9e39522dcb1721a555a619f92ee3ecac289a1d2ac8: Status 404 returned error can't find the container with id e679fd8eddbb4a03d4587b9e39522dcb1721a555a619f92ee3ecac289a1d2ac8 Nov 25 15:31:37 crc kubenswrapper[4796]: I1125 15:31:37.262377 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xg9q" event={"ID":"dd256e25-2097-4283-a785-df3a7d6fb955","Type":"ContainerStarted","Data":"e679fd8eddbb4a03d4587b9e39522dcb1721a555a619f92ee3ecac289a1d2ac8"} Nov 25 15:31:38 crc kubenswrapper[4796]: I1125 15:31:38.274512 4796 generic.go:334] "Generic (PLEG): container finished" podID="dd256e25-2097-4283-a785-df3a7d6fb955" containerID="89d8738980e07050df5e11a30f58e69d37f57d184c634d292a5b1291d779a22a" exitCode=0 Nov 25 15:31:38 crc kubenswrapper[4796]: I1125 15:31:38.274564 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xg9q" 
event={"ID":"dd256e25-2097-4283-a785-df3a7d6fb955","Type":"ContainerDied","Data":"89d8738980e07050df5e11a30f58e69d37f57d184c634d292a5b1291d779a22a"} Nov 25 15:31:38 crc kubenswrapper[4796]: I1125 15:31:38.280157 4796 generic.go:334] "Generic (PLEG): container finished" podID="a1c31266-9678-4c15-b45d-ce36dfbd07d8" containerID="0db401a3ba5f24b005892e266470b8ddf4cd9a1b4ac2d797001e2dc200ca4b30" exitCode=0 Nov 25 15:31:38 crc kubenswrapper[4796]: I1125 15:31:38.280221 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crfpn" event={"ID":"a1c31266-9678-4c15-b45d-ce36dfbd07d8","Type":"ContainerDied","Data":"0db401a3ba5f24b005892e266470b8ddf4cd9a1b4ac2d797001e2dc200ca4b30"} Nov 25 15:31:39 crc kubenswrapper[4796]: I1125 15:31:39.294965 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crfpn" event={"ID":"a1c31266-9678-4c15-b45d-ce36dfbd07d8","Type":"ContainerStarted","Data":"d104091ebb023c26d32349dc0b5b44d5a30cf2859fe61a0c91329e9f9e0108ea"} Nov 25 15:31:39 crc kubenswrapper[4796]: I1125 15:31:39.299228 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xg9q" event={"ID":"dd256e25-2097-4283-a785-df3a7d6fb955","Type":"ContainerStarted","Data":"7b017c6e20e0cc07488be77f27fc1d1ef61232e07f109ca161deb8cd7df94030"} Nov 25 15:31:39 crc kubenswrapper[4796]: I1125 15:31:39.347997 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-crfpn" podStartSLOduration=2.6116546720000002 podStartE2EDuration="8.347979266s" podCreationTimestamp="2025-11-25 15:31:31 +0000 UTC" firstStartedPulling="2025-11-25 15:31:33.20355338 +0000 UTC m=+4021.546662804" lastFinishedPulling="2025-11-25 15:31:38.939877984 +0000 UTC m=+4027.282987398" observedRunningTime="2025-11-25 15:31:39.322169929 +0000 UTC m=+4027.665279363" watchObservedRunningTime="2025-11-25 15:31:39.347979266 +0000 UTC 
m=+4027.691088690" Nov 25 15:31:40 crc kubenswrapper[4796]: I1125 15:31:40.314436 4796 generic.go:334] "Generic (PLEG): container finished" podID="dd256e25-2097-4283-a785-df3a7d6fb955" containerID="7b017c6e20e0cc07488be77f27fc1d1ef61232e07f109ca161deb8cd7df94030" exitCode=0 Nov 25 15:31:40 crc kubenswrapper[4796]: I1125 15:31:40.314485 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xg9q" event={"ID":"dd256e25-2097-4283-a785-df3a7d6fb955","Type":"ContainerDied","Data":"7b017c6e20e0cc07488be77f27fc1d1ef61232e07f109ca161deb8cd7df94030"} Nov 25 15:31:42 crc kubenswrapper[4796]: I1125 15:31:42.176313 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-crfpn" Nov 25 15:31:42 crc kubenswrapper[4796]: I1125 15:31:42.177304 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-crfpn" Nov 25 15:31:42 crc kubenswrapper[4796]: I1125 15:31:42.228828 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-crfpn" Nov 25 15:31:42 crc kubenswrapper[4796]: I1125 15:31:42.334503 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xg9q" event={"ID":"dd256e25-2097-4283-a785-df3a7d6fb955","Type":"ContainerStarted","Data":"1594bc03ae68aabc248e834629a0abbfe38db04f273d939ab3a96c117bf02890"} Nov 25 15:31:42 crc kubenswrapper[4796]: I1125 15:31:42.363118 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4xg9q" podStartSLOduration=3.713615209 podStartE2EDuration="6.363095223s" podCreationTimestamp="2025-11-25 15:31:36 +0000 UTC" firstStartedPulling="2025-11-25 15:31:38.276753086 +0000 UTC m=+4026.619862510" lastFinishedPulling="2025-11-25 15:31:40.92623306 +0000 UTC m=+4029.269342524" observedRunningTime="2025-11-25 
15:31:42.349710005 +0000 UTC m=+4030.692819449" watchObservedRunningTime="2025-11-25 15:31:42.363095223 +0000 UTC m=+4030.706204657" Nov 25 15:31:46 crc kubenswrapper[4796]: I1125 15:31:46.594010 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4xg9q" Nov 25 15:31:46 crc kubenswrapper[4796]: I1125 15:31:46.594602 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4xg9q" Nov 25 15:31:46 crc kubenswrapper[4796]: I1125 15:31:46.644308 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4xg9q" Nov 25 15:31:47 crc kubenswrapper[4796]: I1125 15:31:47.449213 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4xg9q" Nov 25 15:31:47 crc kubenswrapper[4796]: I1125 15:31:47.509418 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xg9q"] Nov 25 15:31:49 crc kubenswrapper[4796]: I1125 15:31:49.414136 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4xg9q" podUID="dd256e25-2097-4283-a785-df3a7d6fb955" containerName="registry-server" containerID="cri-o://1594bc03ae68aabc248e834629a0abbfe38db04f273d939ab3a96c117bf02890" gracePeriod=2 Nov 25 15:31:49 crc kubenswrapper[4796]: I1125 15:31:49.898005 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xg9q" Nov 25 15:31:50 crc kubenswrapper[4796]: I1125 15:31:50.007123 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd256e25-2097-4283-a785-df3a7d6fb955-utilities\") pod \"dd256e25-2097-4283-a785-df3a7d6fb955\" (UID: \"dd256e25-2097-4283-a785-df3a7d6fb955\") " Nov 25 15:31:50 crc kubenswrapper[4796]: I1125 15:31:50.007491 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd256e25-2097-4283-a785-df3a7d6fb955-catalog-content\") pod \"dd256e25-2097-4283-a785-df3a7d6fb955\" (UID: \"dd256e25-2097-4283-a785-df3a7d6fb955\") " Nov 25 15:31:50 crc kubenswrapper[4796]: I1125 15:31:50.007560 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgh8p\" (UniqueName: \"kubernetes.io/projected/dd256e25-2097-4283-a785-df3a7d6fb955-kube-api-access-sgh8p\") pod \"dd256e25-2097-4283-a785-df3a7d6fb955\" (UID: \"dd256e25-2097-4283-a785-df3a7d6fb955\") " Nov 25 15:31:50 crc kubenswrapper[4796]: I1125 15:31:50.008109 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd256e25-2097-4283-a785-df3a7d6fb955-utilities" (OuterVolumeSpecName: "utilities") pod "dd256e25-2097-4283-a785-df3a7d6fb955" (UID: "dd256e25-2097-4283-a785-df3a7d6fb955"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:31:50 crc kubenswrapper[4796]: I1125 15:31:50.013366 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd256e25-2097-4283-a785-df3a7d6fb955-kube-api-access-sgh8p" (OuterVolumeSpecName: "kube-api-access-sgh8p") pod "dd256e25-2097-4283-a785-df3a7d6fb955" (UID: "dd256e25-2097-4283-a785-df3a7d6fb955"). InnerVolumeSpecName "kube-api-access-sgh8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:31:50 crc kubenswrapper[4796]: I1125 15:31:50.030367 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd256e25-2097-4283-a785-df3a7d6fb955-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd256e25-2097-4283-a785-df3a7d6fb955" (UID: "dd256e25-2097-4283-a785-df3a7d6fb955"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:31:50 crc kubenswrapper[4796]: I1125 15:31:50.110238 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd256e25-2097-4283-a785-df3a7d6fb955-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:50 crc kubenswrapper[4796]: I1125 15:31:50.110288 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgh8p\" (UniqueName: \"kubernetes.io/projected/dd256e25-2097-4283-a785-df3a7d6fb955-kube-api-access-sgh8p\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:50 crc kubenswrapper[4796]: I1125 15:31:50.110299 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd256e25-2097-4283-a785-df3a7d6fb955-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:50 crc kubenswrapper[4796]: I1125 15:31:50.428481 4796 generic.go:334] "Generic (PLEG): container finished" podID="dd256e25-2097-4283-a785-df3a7d6fb955" containerID="1594bc03ae68aabc248e834629a0abbfe38db04f273d939ab3a96c117bf02890" exitCode=0 Nov 25 15:31:50 crc kubenswrapper[4796]: I1125 15:31:50.428838 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xg9q" Nov 25 15:31:50 crc kubenswrapper[4796]: I1125 15:31:50.428934 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xg9q" event={"ID":"dd256e25-2097-4283-a785-df3a7d6fb955","Type":"ContainerDied","Data":"1594bc03ae68aabc248e834629a0abbfe38db04f273d939ab3a96c117bf02890"} Nov 25 15:31:50 crc kubenswrapper[4796]: I1125 15:31:50.429260 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xg9q" event={"ID":"dd256e25-2097-4283-a785-df3a7d6fb955","Type":"ContainerDied","Data":"e679fd8eddbb4a03d4587b9e39522dcb1721a555a619f92ee3ecac289a1d2ac8"} Nov 25 15:31:50 crc kubenswrapper[4796]: I1125 15:31:50.429300 4796 scope.go:117] "RemoveContainer" containerID="1594bc03ae68aabc248e834629a0abbfe38db04f273d939ab3a96c117bf02890" Nov 25 15:31:50 crc kubenswrapper[4796]: I1125 15:31:50.457781 4796 scope.go:117] "RemoveContainer" containerID="7b017c6e20e0cc07488be77f27fc1d1ef61232e07f109ca161deb8cd7df94030" Nov 25 15:31:50 crc kubenswrapper[4796]: I1125 15:31:50.482230 4796 scope.go:117] "RemoveContainer" containerID="89d8738980e07050df5e11a30f58e69d37f57d184c634d292a5b1291d779a22a" Nov 25 15:31:50 crc kubenswrapper[4796]: I1125 15:31:50.483742 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xg9q"] Nov 25 15:31:50 crc kubenswrapper[4796]: I1125 15:31:50.494565 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xg9q"] Nov 25 15:31:50 crc kubenswrapper[4796]: I1125 15:31:50.527680 4796 scope.go:117] "RemoveContainer" containerID="1594bc03ae68aabc248e834629a0abbfe38db04f273d939ab3a96c117bf02890" Nov 25 15:31:50 crc kubenswrapper[4796]: E1125 15:31:50.529035 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1594bc03ae68aabc248e834629a0abbfe38db04f273d939ab3a96c117bf02890\": container with ID starting with 1594bc03ae68aabc248e834629a0abbfe38db04f273d939ab3a96c117bf02890 not found: ID does not exist" containerID="1594bc03ae68aabc248e834629a0abbfe38db04f273d939ab3a96c117bf02890" Nov 25 15:31:50 crc kubenswrapper[4796]: I1125 15:31:50.529087 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1594bc03ae68aabc248e834629a0abbfe38db04f273d939ab3a96c117bf02890"} err="failed to get container status \"1594bc03ae68aabc248e834629a0abbfe38db04f273d939ab3a96c117bf02890\": rpc error: code = NotFound desc = could not find container \"1594bc03ae68aabc248e834629a0abbfe38db04f273d939ab3a96c117bf02890\": container with ID starting with 1594bc03ae68aabc248e834629a0abbfe38db04f273d939ab3a96c117bf02890 not found: ID does not exist" Nov 25 15:31:50 crc kubenswrapper[4796]: I1125 15:31:50.529116 4796 scope.go:117] "RemoveContainer" containerID="7b017c6e20e0cc07488be77f27fc1d1ef61232e07f109ca161deb8cd7df94030" Nov 25 15:31:50 crc kubenswrapper[4796]: E1125 15:31:50.531946 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b017c6e20e0cc07488be77f27fc1d1ef61232e07f109ca161deb8cd7df94030\": container with ID starting with 7b017c6e20e0cc07488be77f27fc1d1ef61232e07f109ca161deb8cd7df94030 not found: ID does not exist" containerID="7b017c6e20e0cc07488be77f27fc1d1ef61232e07f109ca161deb8cd7df94030" Nov 25 15:31:50 crc kubenswrapper[4796]: I1125 15:31:50.532001 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b017c6e20e0cc07488be77f27fc1d1ef61232e07f109ca161deb8cd7df94030"} err="failed to get container status \"7b017c6e20e0cc07488be77f27fc1d1ef61232e07f109ca161deb8cd7df94030\": rpc error: code = NotFound desc = could not find container \"7b017c6e20e0cc07488be77f27fc1d1ef61232e07f109ca161deb8cd7df94030\": container with ID 
starting with 7b017c6e20e0cc07488be77f27fc1d1ef61232e07f109ca161deb8cd7df94030 not found: ID does not exist" Nov 25 15:31:50 crc kubenswrapper[4796]: I1125 15:31:50.532035 4796 scope.go:117] "RemoveContainer" containerID="89d8738980e07050df5e11a30f58e69d37f57d184c634d292a5b1291d779a22a" Nov 25 15:31:50 crc kubenswrapper[4796]: E1125 15:31:50.532325 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89d8738980e07050df5e11a30f58e69d37f57d184c634d292a5b1291d779a22a\": container with ID starting with 89d8738980e07050df5e11a30f58e69d37f57d184c634d292a5b1291d779a22a not found: ID does not exist" containerID="89d8738980e07050df5e11a30f58e69d37f57d184c634d292a5b1291d779a22a" Nov 25 15:31:50 crc kubenswrapper[4796]: I1125 15:31:50.532349 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89d8738980e07050df5e11a30f58e69d37f57d184c634d292a5b1291d779a22a"} err="failed to get container status \"89d8738980e07050df5e11a30f58e69d37f57d184c634d292a5b1291d779a22a\": rpc error: code = NotFound desc = could not find container \"89d8738980e07050df5e11a30f58e69d37f57d184c634d292a5b1291d779a22a\": container with ID starting with 89d8738980e07050df5e11a30f58e69d37f57d184c634d292a5b1291d779a22a not found: ID does not exist" Nov 25 15:31:52 crc kubenswrapper[4796]: I1125 15:31:52.436498 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd256e25-2097-4283-a785-df3a7d6fb955" path="/var/lib/kubelet/pods/dd256e25-2097-4283-a785-df3a7d6fb955/volumes" Nov 25 15:31:52 crc kubenswrapper[4796]: I1125 15:31:52.632178 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-crfpn" Nov 25 15:31:53 crc kubenswrapper[4796]: I1125 15:31:53.624480 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-crfpn"] Nov 25 15:31:53 crc 
kubenswrapper[4796]: I1125 15:31:53.624863 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-crfpn" podUID="a1c31266-9678-4c15-b45d-ce36dfbd07d8" containerName="registry-server" containerID="cri-o://d104091ebb023c26d32349dc0b5b44d5a30cf2859fe61a0c91329e9f9e0108ea" gracePeriod=2 Nov 25 15:31:54 crc kubenswrapper[4796]: I1125 15:31:54.487001 4796 generic.go:334] "Generic (PLEG): container finished" podID="a1c31266-9678-4c15-b45d-ce36dfbd07d8" containerID="d104091ebb023c26d32349dc0b5b44d5a30cf2859fe61a0c91329e9f9e0108ea" exitCode=0 Nov 25 15:31:54 crc kubenswrapper[4796]: I1125 15:31:54.487088 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crfpn" event={"ID":"a1c31266-9678-4c15-b45d-ce36dfbd07d8","Type":"ContainerDied","Data":"d104091ebb023c26d32349dc0b5b44d5a30cf2859fe61a0c91329e9f9e0108ea"} Nov 25 15:31:54 crc kubenswrapper[4796]: I1125 15:31:54.682255 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-crfpn" Nov 25 15:31:54 crc kubenswrapper[4796]: I1125 15:31:54.804055 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1c31266-9678-4c15-b45d-ce36dfbd07d8-utilities\") pod \"a1c31266-9678-4c15-b45d-ce36dfbd07d8\" (UID: \"a1c31266-9678-4c15-b45d-ce36dfbd07d8\") " Nov 25 15:31:54 crc kubenswrapper[4796]: I1125 15:31:54.804151 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1c31266-9678-4c15-b45d-ce36dfbd07d8-catalog-content\") pod \"a1c31266-9678-4c15-b45d-ce36dfbd07d8\" (UID: \"a1c31266-9678-4c15-b45d-ce36dfbd07d8\") " Nov 25 15:31:54 crc kubenswrapper[4796]: I1125 15:31:54.804296 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htvx7\" (UniqueName: \"kubernetes.io/projected/a1c31266-9678-4c15-b45d-ce36dfbd07d8-kube-api-access-htvx7\") pod \"a1c31266-9678-4c15-b45d-ce36dfbd07d8\" (UID: \"a1c31266-9678-4c15-b45d-ce36dfbd07d8\") " Nov 25 15:31:54 crc kubenswrapper[4796]: I1125 15:31:54.805512 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1c31266-9678-4c15-b45d-ce36dfbd07d8-utilities" (OuterVolumeSpecName: "utilities") pod "a1c31266-9678-4c15-b45d-ce36dfbd07d8" (UID: "a1c31266-9678-4c15-b45d-ce36dfbd07d8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:31:54 crc kubenswrapper[4796]: I1125 15:31:54.806386 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1c31266-9678-4c15-b45d-ce36dfbd07d8-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:54 crc kubenswrapper[4796]: I1125 15:31:54.812075 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1c31266-9678-4c15-b45d-ce36dfbd07d8-kube-api-access-htvx7" (OuterVolumeSpecName: "kube-api-access-htvx7") pod "a1c31266-9678-4c15-b45d-ce36dfbd07d8" (UID: "a1c31266-9678-4c15-b45d-ce36dfbd07d8"). InnerVolumeSpecName "kube-api-access-htvx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:31:54 crc kubenswrapper[4796]: I1125 15:31:54.869023 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1c31266-9678-4c15-b45d-ce36dfbd07d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1c31266-9678-4c15-b45d-ce36dfbd07d8" (UID: "a1c31266-9678-4c15-b45d-ce36dfbd07d8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:31:54 crc kubenswrapper[4796]: I1125 15:31:54.908714 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1c31266-9678-4c15-b45d-ce36dfbd07d8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:54 crc kubenswrapper[4796]: I1125 15:31:54.908756 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htvx7\" (UniqueName: \"kubernetes.io/projected/a1c31266-9678-4c15-b45d-ce36dfbd07d8-kube-api-access-htvx7\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:55 crc kubenswrapper[4796]: I1125 15:31:55.496942 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crfpn" event={"ID":"a1c31266-9678-4c15-b45d-ce36dfbd07d8","Type":"ContainerDied","Data":"9cc16313f3ded7bc695d0f113bb6a52fd61fe77d939158f7fed921c483ad3c9d"} Nov 25 15:31:55 crc kubenswrapper[4796]: I1125 15:31:55.497003 4796 scope.go:117] "RemoveContainer" containerID="d104091ebb023c26d32349dc0b5b44d5a30cf2859fe61a0c91329e9f9e0108ea" Nov 25 15:31:55 crc kubenswrapper[4796]: I1125 15:31:55.497015 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-crfpn" Nov 25 15:31:55 crc kubenswrapper[4796]: I1125 15:31:55.531500 4796 scope.go:117] "RemoveContainer" containerID="0db401a3ba5f24b005892e266470b8ddf4cd9a1b4ac2d797001e2dc200ca4b30" Nov 25 15:31:55 crc kubenswrapper[4796]: I1125 15:31:55.539558 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-crfpn"] Nov 25 15:31:55 crc kubenswrapper[4796]: I1125 15:31:55.554078 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-crfpn"] Nov 25 15:31:55 crc kubenswrapper[4796]: I1125 15:31:55.570651 4796 scope.go:117] "RemoveContainer" containerID="7f75d18f812da3357b28d12d9f2f9b211b2a361a28da627c5ceb39f9469bce31" Nov 25 15:31:56 crc kubenswrapper[4796]: I1125 15:31:56.429445 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1c31266-9678-4c15-b45d-ce36dfbd07d8" path="/var/lib/kubelet/pods/a1c31266-9678-4c15-b45d-ce36dfbd07d8/volumes" Nov 25 15:32:49 crc kubenswrapper[4796]: I1125 15:32:49.460770 4796 scope.go:117] "RemoveContainer" containerID="23328013a085e6f26fa2786aee1da4387a9d1d170ef46f62095cc26ea9cc9cca" Nov 25 15:32:49 crc kubenswrapper[4796]: I1125 15:32:49.491040 4796 scope.go:117] "RemoveContainer" containerID="f682ac403121b2aba75bed1c4e18af69bf163d144b0d98c8e0f99754c1ee0da4" Nov 25 15:32:49 crc kubenswrapper[4796]: I1125 15:32:49.514408 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:32:49 crc kubenswrapper[4796]: I1125 15:32:49.514491 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:33:19 crc kubenswrapper[4796]: I1125 15:33:19.514294 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:33:19 crc kubenswrapper[4796]: I1125 15:33:19.514967 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:33:44 crc kubenswrapper[4796]: I1125 15:33:44.696950 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-j2hr4/must-gather-j88d4"] Nov 25 15:33:44 crc kubenswrapper[4796]: E1125 15:33:44.697926 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd256e25-2097-4283-a785-df3a7d6fb955" containerName="extract-content" Nov 25 15:33:44 crc kubenswrapper[4796]: I1125 15:33:44.697944 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd256e25-2097-4283-a785-df3a7d6fb955" containerName="extract-content" Nov 25 15:33:44 crc kubenswrapper[4796]: E1125 15:33:44.697972 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd256e25-2097-4283-a785-df3a7d6fb955" containerName="registry-server" Nov 25 15:33:44 crc kubenswrapper[4796]: I1125 15:33:44.697980 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd256e25-2097-4283-a785-df3a7d6fb955" containerName="registry-server" Nov 25 15:33:44 crc kubenswrapper[4796]: E1125 15:33:44.697994 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a1c31266-9678-4c15-b45d-ce36dfbd07d8" containerName="extract-content" Nov 25 15:33:44 crc kubenswrapper[4796]: I1125 15:33:44.698001 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1c31266-9678-4c15-b45d-ce36dfbd07d8" containerName="extract-content" Nov 25 15:33:44 crc kubenswrapper[4796]: E1125 15:33:44.698023 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c31266-9678-4c15-b45d-ce36dfbd07d8" containerName="registry-server" Nov 25 15:33:44 crc kubenswrapper[4796]: I1125 15:33:44.698030 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1c31266-9678-4c15-b45d-ce36dfbd07d8" containerName="registry-server" Nov 25 15:33:44 crc kubenswrapper[4796]: E1125 15:33:44.698049 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c31266-9678-4c15-b45d-ce36dfbd07d8" containerName="extract-utilities" Nov 25 15:33:44 crc kubenswrapper[4796]: I1125 15:33:44.698058 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1c31266-9678-4c15-b45d-ce36dfbd07d8" containerName="extract-utilities" Nov 25 15:33:44 crc kubenswrapper[4796]: E1125 15:33:44.698078 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd256e25-2097-4283-a785-df3a7d6fb955" containerName="extract-utilities" Nov 25 15:33:44 crc kubenswrapper[4796]: I1125 15:33:44.698085 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd256e25-2097-4283-a785-df3a7d6fb955" containerName="extract-utilities" Nov 25 15:33:44 crc kubenswrapper[4796]: I1125 15:33:44.698314 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd256e25-2097-4283-a785-df3a7d6fb955" containerName="registry-server" Nov 25 15:33:44 crc kubenswrapper[4796]: I1125 15:33:44.698345 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1c31266-9678-4c15-b45d-ce36dfbd07d8" containerName="registry-server" Nov 25 15:33:44 crc kubenswrapper[4796]: I1125 15:33:44.699384 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j2hr4/must-gather-j88d4" Nov 25 15:33:44 crc kubenswrapper[4796]: I1125 15:33:44.701221 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-j2hr4"/"kube-root-ca.crt" Nov 25 15:33:44 crc kubenswrapper[4796]: I1125 15:33:44.701669 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-j2hr4"/"default-dockercfg-zhvcc" Nov 25 15:33:44 crc kubenswrapper[4796]: I1125 15:33:44.703869 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-j2hr4"/"openshift-service-ca.crt" Nov 25 15:33:44 crc kubenswrapper[4796]: I1125 15:33:44.709662 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-j2hr4/must-gather-j88d4"] Nov 25 15:33:44 crc kubenswrapper[4796]: I1125 15:33:44.825189 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d4d5bc49-b766-4225-be3b-b2cc78c22ae9-must-gather-output\") pod \"must-gather-j88d4\" (UID: \"d4d5bc49-b766-4225-be3b-b2cc78c22ae9\") " pod="openshift-must-gather-j2hr4/must-gather-j88d4" Nov 25 15:33:44 crc kubenswrapper[4796]: I1125 15:33:44.825243 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp86g\" (UniqueName: \"kubernetes.io/projected/d4d5bc49-b766-4225-be3b-b2cc78c22ae9-kube-api-access-cp86g\") pod \"must-gather-j88d4\" (UID: \"d4d5bc49-b766-4225-be3b-b2cc78c22ae9\") " pod="openshift-must-gather-j2hr4/must-gather-j88d4" Nov 25 15:33:44 crc kubenswrapper[4796]: I1125 15:33:44.927621 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d4d5bc49-b766-4225-be3b-b2cc78c22ae9-must-gather-output\") pod \"must-gather-j88d4\" (UID: \"d4d5bc49-b766-4225-be3b-b2cc78c22ae9\") " 
pod="openshift-must-gather-j2hr4/must-gather-j88d4" Nov 25 15:33:44 crc kubenswrapper[4796]: I1125 15:33:44.927729 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp86g\" (UniqueName: \"kubernetes.io/projected/d4d5bc49-b766-4225-be3b-b2cc78c22ae9-kube-api-access-cp86g\") pod \"must-gather-j88d4\" (UID: \"d4d5bc49-b766-4225-be3b-b2cc78c22ae9\") " pod="openshift-must-gather-j2hr4/must-gather-j88d4" Nov 25 15:33:44 crc kubenswrapper[4796]: I1125 15:33:44.928127 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d4d5bc49-b766-4225-be3b-b2cc78c22ae9-must-gather-output\") pod \"must-gather-j88d4\" (UID: \"d4d5bc49-b766-4225-be3b-b2cc78c22ae9\") " pod="openshift-must-gather-j2hr4/must-gather-j88d4" Nov 25 15:33:44 crc kubenswrapper[4796]: I1125 15:33:44.945297 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp86g\" (UniqueName: \"kubernetes.io/projected/d4d5bc49-b766-4225-be3b-b2cc78c22ae9-kube-api-access-cp86g\") pod \"must-gather-j88d4\" (UID: \"d4d5bc49-b766-4225-be3b-b2cc78c22ae9\") " pod="openshift-must-gather-j2hr4/must-gather-j88d4" Nov 25 15:33:45 crc kubenswrapper[4796]: I1125 15:33:45.027968 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j2hr4/must-gather-j88d4" Nov 25 15:33:45 crc kubenswrapper[4796]: W1125 15:33:45.509220 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4d5bc49_b766_4225_be3b_b2cc78c22ae9.slice/crio-c2059e46a6fe6148a9615368363a928dd9db6ef95806d13c195c0e908042e6e8 WatchSource:0}: Error finding container c2059e46a6fe6148a9615368363a928dd9db6ef95806d13c195c0e908042e6e8: Status 404 returned error can't find the container with id c2059e46a6fe6148a9615368363a928dd9db6ef95806d13c195c0e908042e6e8 Nov 25 15:33:45 crc kubenswrapper[4796]: I1125 15:33:45.516772 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-j2hr4/must-gather-j88d4"] Nov 25 15:33:45 crc kubenswrapper[4796]: I1125 15:33:45.631186 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j2hr4/must-gather-j88d4" event={"ID":"d4d5bc49-b766-4225-be3b-b2cc78c22ae9","Type":"ContainerStarted","Data":"c2059e46a6fe6148a9615368363a928dd9db6ef95806d13c195c0e908042e6e8"} Nov 25 15:33:46 crc kubenswrapper[4796]: I1125 15:33:46.641797 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j2hr4/must-gather-j88d4" event={"ID":"d4d5bc49-b766-4225-be3b-b2cc78c22ae9","Type":"ContainerStarted","Data":"0d1c5b69a9120ff5c5caabd0c03916eb46abe06bd2b6f328ca1313704a61f118"} Nov 25 15:33:46 crc kubenswrapper[4796]: I1125 15:33:46.641839 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j2hr4/must-gather-j88d4" event={"ID":"d4d5bc49-b766-4225-be3b-b2cc78c22ae9","Type":"ContainerStarted","Data":"690344763ce68ff6d3fff3f98e53903915a239f2ca89b7a75d2a2f2545c8b483"} Nov 25 15:33:46 crc kubenswrapper[4796]: I1125 15:33:46.663745 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-j2hr4/must-gather-j88d4" podStartSLOduration=2.663728569 
podStartE2EDuration="2.663728569s" podCreationTimestamp="2025-11-25 15:33:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:33:46.661846261 +0000 UTC m=+4155.004955685" watchObservedRunningTime="2025-11-25 15:33:46.663728569 +0000 UTC m=+4155.006837983" Nov 25 15:33:48 crc kubenswrapper[4796]: E1125 15:33:48.854678 4796 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.227:58556->38.102.83.227:35215: write tcp 38.102.83.227:58556->38.102.83.227:35215: write: broken pipe Nov 25 15:33:49 crc kubenswrapper[4796]: E1125 15:33:49.042683 4796 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.227:58582->38.102.83.227:35215: write tcp 38.102.83.227:58582->38.102.83.227:35215: write: broken pipe Nov 25 15:33:49 crc kubenswrapper[4796]: I1125 15:33:49.513867 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:33:49 crc kubenswrapper[4796]: I1125 15:33:49.513938 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:33:49 crc kubenswrapper[4796]: I1125 15:33:49.513987 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 15:33:49 crc kubenswrapper[4796]: I1125 15:33:49.514980 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"ab149f04ee33eb6fa179e2fae0783da6b2b9681d3eecf03b9d858d176b0d61b6"} pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 15:33:49 crc kubenswrapper[4796]: I1125 15:33:49.515072 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" containerID="cri-o://ab149f04ee33eb6fa179e2fae0783da6b2b9681d3eecf03b9d858d176b0d61b6" gracePeriod=600 Nov 25 15:33:49 crc kubenswrapper[4796]: I1125 15:33:49.655010 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-j2hr4/crc-debug-fz778"] Nov 25 15:33:49 crc kubenswrapper[4796]: I1125 15:33:49.656527 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j2hr4/crc-debug-fz778" Nov 25 15:33:49 crc kubenswrapper[4796]: I1125 15:33:49.683334 4796 generic.go:334] "Generic (PLEG): container finished" podID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerID="ab149f04ee33eb6fa179e2fae0783da6b2b9681d3eecf03b9d858d176b0d61b6" exitCode=0 Nov 25 15:33:49 crc kubenswrapper[4796]: I1125 15:33:49.683374 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerDied","Data":"ab149f04ee33eb6fa179e2fae0783da6b2b9681d3eecf03b9d858d176b0d61b6"} Nov 25 15:33:49 crc kubenswrapper[4796]: I1125 15:33:49.683406 4796 scope.go:117] "RemoveContainer" containerID="397565aca118e56e677cf508602e83c93631eac42773c311aa0bdd06fac3feee" Nov 25 15:33:49 crc kubenswrapper[4796]: I1125 15:33:49.722348 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/eae7dcd9-6be5-4fd1-90d2-7047331c3d96-host\") pod \"crc-debug-fz778\" (UID: \"eae7dcd9-6be5-4fd1-90d2-7047331c3d96\") " pod="openshift-must-gather-j2hr4/crc-debug-fz778" Nov 25 15:33:49 crc kubenswrapper[4796]: I1125 15:33:49.722465 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48j5c\" (UniqueName: \"kubernetes.io/projected/eae7dcd9-6be5-4fd1-90d2-7047331c3d96-kube-api-access-48j5c\") pod \"crc-debug-fz778\" (UID: \"eae7dcd9-6be5-4fd1-90d2-7047331c3d96\") " pod="openshift-must-gather-j2hr4/crc-debug-fz778" Nov 25 15:33:49 crc kubenswrapper[4796]: I1125 15:33:49.824616 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/eae7dcd9-6be5-4fd1-90d2-7047331c3d96-host\") pod \"crc-debug-fz778\" (UID: \"eae7dcd9-6be5-4fd1-90d2-7047331c3d96\") " pod="openshift-must-gather-j2hr4/crc-debug-fz778" Nov 25 15:33:49 crc kubenswrapper[4796]: I1125 15:33:49.824704 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/eae7dcd9-6be5-4fd1-90d2-7047331c3d96-host\") pod \"crc-debug-fz778\" (UID: \"eae7dcd9-6be5-4fd1-90d2-7047331c3d96\") " pod="openshift-must-gather-j2hr4/crc-debug-fz778" Nov 25 15:33:49 crc kubenswrapper[4796]: I1125 15:33:49.824730 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48j5c\" (UniqueName: \"kubernetes.io/projected/eae7dcd9-6be5-4fd1-90d2-7047331c3d96-kube-api-access-48j5c\") pod \"crc-debug-fz778\" (UID: \"eae7dcd9-6be5-4fd1-90d2-7047331c3d96\") " pod="openshift-must-gather-j2hr4/crc-debug-fz778" Nov 25 15:33:49 crc kubenswrapper[4796]: I1125 15:33:49.855174 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48j5c\" (UniqueName: \"kubernetes.io/projected/eae7dcd9-6be5-4fd1-90d2-7047331c3d96-kube-api-access-48j5c\") pod 
\"crc-debug-fz778\" (UID: \"eae7dcd9-6be5-4fd1-90d2-7047331c3d96\") " pod="openshift-must-gather-j2hr4/crc-debug-fz778" Nov 25 15:33:50 crc kubenswrapper[4796]: I1125 15:33:50.009039 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j2hr4/crc-debug-fz778" Nov 25 15:33:50 crc kubenswrapper[4796]: I1125 15:33:50.699104 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerStarted","Data":"e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927"} Nov 25 15:33:50 crc kubenswrapper[4796]: I1125 15:33:50.709142 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j2hr4/crc-debug-fz778" event={"ID":"eae7dcd9-6be5-4fd1-90d2-7047331c3d96","Type":"ContainerStarted","Data":"06b1703bf6ef152b17f8d838cb22996569383b59944bc00799a61e2d5ddf9850"} Nov 25 15:33:50 crc kubenswrapper[4796]: I1125 15:33:50.709451 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j2hr4/crc-debug-fz778" event={"ID":"eae7dcd9-6be5-4fd1-90d2-7047331c3d96","Type":"ContainerStarted","Data":"b66d0658580497d08ad4a15247cebd9f57427125a7da9830586ebfc7a6e4bf4e"} Nov 25 15:33:50 crc kubenswrapper[4796]: I1125 15:33:50.742745 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-j2hr4/crc-debug-fz778" podStartSLOduration=1.742724066 podStartE2EDuration="1.742724066s" podCreationTimestamp="2025-11-25 15:33:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:33:50.726455957 +0000 UTC m=+4159.069565391" watchObservedRunningTime="2025-11-25 15:33:50.742724066 +0000 UTC m=+4159.085833490" Nov 25 15:34:25 crc kubenswrapper[4796]: I1125 15:34:25.030021 4796 generic.go:334] "Generic (PLEG): container finished" 
podID="eae7dcd9-6be5-4fd1-90d2-7047331c3d96" containerID="06b1703bf6ef152b17f8d838cb22996569383b59944bc00799a61e2d5ddf9850" exitCode=0 Nov 25 15:34:25 crc kubenswrapper[4796]: I1125 15:34:25.030129 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j2hr4/crc-debug-fz778" event={"ID":"eae7dcd9-6be5-4fd1-90d2-7047331c3d96","Type":"ContainerDied","Data":"06b1703bf6ef152b17f8d838cb22996569383b59944bc00799a61e2d5ddf9850"} Nov 25 15:34:26 crc kubenswrapper[4796]: I1125 15:34:26.170042 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j2hr4/crc-debug-fz778" Nov 25 15:34:26 crc kubenswrapper[4796]: I1125 15:34:26.211375 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-j2hr4/crc-debug-fz778"] Nov 25 15:34:26 crc kubenswrapper[4796]: I1125 15:34:26.223936 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-j2hr4/crc-debug-fz778"] Nov 25 15:34:26 crc kubenswrapper[4796]: I1125 15:34:26.245391 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48j5c\" (UniqueName: \"kubernetes.io/projected/eae7dcd9-6be5-4fd1-90d2-7047331c3d96-kube-api-access-48j5c\") pod \"eae7dcd9-6be5-4fd1-90d2-7047331c3d96\" (UID: \"eae7dcd9-6be5-4fd1-90d2-7047331c3d96\") " Nov 25 15:34:26 crc kubenswrapper[4796]: I1125 15:34:26.245556 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/eae7dcd9-6be5-4fd1-90d2-7047331c3d96-host\") pod \"eae7dcd9-6be5-4fd1-90d2-7047331c3d96\" (UID: \"eae7dcd9-6be5-4fd1-90d2-7047331c3d96\") " Nov 25 15:34:26 crc kubenswrapper[4796]: I1125 15:34:26.245754 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eae7dcd9-6be5-4fd1-90d2-7047331c3d96-host" (OuterVolumeSpecName: "host") pod "eae7dcd9-6be5-4fd1-90d2-7047331c3d96" (UID: 
"eae7dcd9-6be5-4fd1-90d2-7047331c3d96"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:34:26 crc kubenswrapper[4796]: I1125 15:34:26.246685 4796 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/eae7dcd9-6be5-4fd1-90d2-7047331c3d96-host\") on node \"crc\" DevicePath \"\"" Nov 25 15:34:26 crc kubenswrapper[4796]: I1125 15:34:26.255235 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eae7dcd9-6be5-4fd1-90d2-7047331c3d96-kube-api-access-48j5c" (OuterVolumeSpecName: "kube-api-access-48j5c") pod "eae7dcd9-6be5-4fd1-90d2-7047331c3d96" (UID: "eae7dcd9-6be5-4fd1-90d2-7047331c3d96"). InnerVolumeSpecName "kube-api-access-48j5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:34:26 crc kubenswrapper[4796]: I1125 15:34:26.349001 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48j5c\" (UniqueName: \"kubernetes.io/projected/eae7dcd9-6be5-4fd1-90d2-7047331c3d96-kube-api-access-48j5c\") on node \"crc\" DevicePath \"\"" Nov 25 15:34:26 crc kubenswrapper[4796]: I1125 15:34:26.422176 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eae7dcd9-6be5-4fd1-90d2-7047331c3d96" path="/var/lib/kubelet/pods/eae7dcd9-6be5-4fd1-90d2-7047331c3d96/volumes" Nov 25 15:34:27 crc kubenswrapper[4796]: I1125 15:34:27.053867 4796 scope.go:117] "RemoveContainer" containerID="06b1703bf6ef152b17f8d838cb22996569383b59944bc00799a61e2d5ddf9850" Nov 25 15:34:27 crc kubenswrapper[4796]: I1125 15:34:27.053920 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j2hr4/crc-debug-fz778" Nov 25 15:34:27 crc kubenswrapper[4796]: I1125 15:34:27.387132 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-j2hr4/crc-debug-smd22"] Nov 25 15:34:27 crc kubenswrapper[4796]: E1125 15:34:27.387794 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae7dcd9-6be5-4fd1-90d2-7047331c3d96" containerName="container-00" Nov 25 15:34:27 crc kubenswrapper[4796]: I1125 15:34:27.387814 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="eae7dcd9-6be5-4fd1-90d2-7047331c3d96" containerName="container-00" Nov 25 15:34:27 crc kubenswrapper[4796]: I1125 15:34:27.388082 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="eae7dcd9-6be5-4fd1-90d2-7047331c3d96" containerName="container-00" Nov 25 15:34:27 crc kubenswrapper[4796]: I1125 15:34:27.388997 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j2hr4/crc-debug-smd22" Nov 25 15:34:27 crc kubenswrapper[4796]: I1125 15:34:27.471621 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/80481045-d9ca-4f5c-a9d4-2b68aa5a1d26-host\") pod \"crc-debug-smd22\" (UID: \"80481045-d9ca-4f5c-a9d4-2b68aa5a1d26\") " pod="openshift-must-gather-j2hr4/crc-debug-smd22" Nov 25 15:34:27 crc kubenswrapper[4796]: I1125 15:34:27.471799 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmsp4\" (UniqueName: \"kubernetes.io/projected/80481045-d9ca-4f5c-a9d4-2b68aa5a1d26-kube-api-access-vmsp4\") pod \"crc-debug-smd22\" (UID: \"80481045-d9ca-4f5c-a9d4-2b68aa5a1d26\") " pod="openshift-must-gather-j2hr4/crc-debug-smd22" Nov 25 15:34:27 crc kubenswrapper[4796]: I1125 15:34:27.574362 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/80481045-d9ca-4f5c-a9d4-2b68aa5a1d26-host\") pod \"crc-debug-smd22\" (UID: \"80481045-d9ca-4f5c-a9d4-2b68aa5a1d26\") " pod="openshift-must-gather-j2hr4/crc-debug-smd22" Nov 25 15:34:27 crc kubenswrapper[4796]: I1125 15:34:27.574503 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/80481045-d9ca-4f5c-a9d4-2b68aa5a1d26-host\") pod \"crc-debug-smd22\" (UID: \"80481045-d9ca-4f5c-a9d4-2b68aa5a1d26\") " pod="openshift-must-gather-j2hr4/crc-debug-smd22" Nov 25 15:34:27 crc kubenswrapper[4796]: I1125 15:34:27.574552 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmsp4\" (UniqueName: \"kubernetes.io/projected/80481045-d9ca-4f5c-a9d4-2b68aa5a1d26-kube-api-access-vmsp4\") pod \"crc-debug-smd22\" (UID: \"80481045-d9ca-4f5c-a9d4-2b68aa5a1d26\") " pod="openshift-must-gather-j2hr4/crc-debug-smd22" Nov 25 15:34:27 crc kubenswrapper[4796]: I1125 15:34:27.602247 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmsp4\" (UniqueName: \"kubernetes.io/projected/80481045-d9ca-4f5c-a9d4-2b68aa5a1d26-kube-api-access-vmsp4\") pod \"crc-debug-smd22\" (UID: \"80481045-d9ca-4f5c-a9d4-2b68aa5a1d26\") " pod="openshift-must-gather-j2hr4/crc-debug-smd22" Nov 25 15:34:27 crc kubenswrapper[4796]: I1125 15:34:27.708827 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j2hr4/crc-debug-smd22" Nov 25 15:34:28 crc kubenswrapper[4796]: I1125 15:34:28.065166 4796 generic.go:334] "Generic (PLEG): container finished" podID="80481045-d9ca-4f5c-a9d4-2b68aa5a1d26" containerID="67469e417b6291b0f3a3f4248ffed99ecc29d5c4b9d4357e0acc47194563cb7c" exitCode=0 Nov 25 15:34:28 crc kubenswrapper[4796]: I1125 15:34:28.065288 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j2hr4/crc-debug-smd22" event={"ID":"80481045-d9ca-4f5c-a9d4-2b68aa5a1d26","Type":"ContainerDied","Data":"67469e417b6291b0f3a3f4248ffed99ecc29d5c4b9d4357e0acc47194563cb7c"} Nov 25 15:34:28 crc kubenswrapper[4796]: I1125 15:34:28.065638 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j2hr4/crc-debug-smd22" event={"ID":"80481045-d9ca-4f5c-a9d4-2b68aa5a1d26","Type":"ContainerStarted","Data":"f8642077afb134e15aab9721c4c097ea821fd0e7b041094450eb0b6201912ddb"} Nov 25 15:34:28 crc kubenswrapper[4796]: I1125 15:34:28.592646 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-j2hr4/crc-debug-smd22"] Nov 25 15:34:28 crc kubenswrapper[4796]: I1125 15:34:28.605061 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-j2hr4/crc-debug-smd22"] Nov 25 15:34:29 crc kubenswrapper[4796]: I1125 15:34:29.189226 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j2hr4/crc-debug-smd22" Nov 25 15:34:29 crc kubenswrapper[4796]: I1125 15:34:29.202774 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmsp4\" (UniqueName: \"kubernetes.io/projected/80481045-d9ca-4f5c-a9d4-2b68aa5a1d26-kube-api-access-vmsp4\") pod \"80481045-d9ca-4f5c-a9d4-2b68aa5a1d26\" (UID: \"80481045-d9ca-4f5c-a9d4-2b68aa5a1d26\") " Nov 25 15:34:29 crc kubenswrapper[4796]: I1125 15:34:29.203036 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/80481045-d9ca-4f5c-a9d4-2b68aa5a1d26-host\") pod \"80481045-d9ca-4f5c-a9d4-2b68aa5a1d26\" (UID: \"80481045-d9ca-4f5c-a9d4-2b68aa5a1d26\") " Nov 25 15:34:29 crc kubenswrapper[4796]: I1125 15:34:29.203139 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80481045-d9ca-4f5c-a9d4-2b68aa5a1d26-host" (OuterVolumeSpecName: "host") pod "80481045-d9ca-4f5c-a9d4-2b68aa5a1d26" (UID: "80481045-d9ca-4f5c-a9d4-2b68aa5a1d26"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:34:29 crc kubenswrapper[4796]: I1125 15:34:29.203774 4796 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/80481045-d9ca-4f5c-a9d4-2b68aa5a1d26-host\") on node \"crc\" DevicePath \"\"" Nov 25 15:34:29 crc kubenswrapper[4796]: I1125 15:34:29.207980 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80481045-d9ca-4f5c-a9d4-2b68aa5a1d26-kube-api-access-vmsp4" (OuterVolumeSpecName: "kube-api-access-vmsp4") pod "80481045-d9ca-4f5c-a9d4-2b68aa5a1d26" (UID: "80481045-d9ca-4f5c-a9d4-2b68aa5a1d26"). InnerVolumeSpecName "kube-api-access-vmsp4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:34:29 crc kubenswrapper[4796]: I1125 15:34:29.305196 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmsp4\" (UniqueName: \"kubernetes.io/projected/80481045-d9ca-4f5c-a9d4-2b68aa5a1d26-kube-api-access-vmsp4\") on node \"crc\" DevicePath \"\"" Nov 25 15:34:29 crc kubenswrapper[4796]: I1125 15:34:29.764635 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-j2hr4/crc-debug-qltzt"] Nov 25 15:34:29 crc kubenswrapper[4796]: E1125 15:34:29.765291 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80481045-d9ca-4f5c-a9d4-2b68aa5a1d26" containerName="container-00" Nov 25 15:34:29 crc kubenswrapper[4796]: I1125 15:34:29.765303 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="80481045-d9ca-4f5c-a9d4-2b68aa5a1d26" containerName="container-00" Nov 25 15:34:29 crc kubenswrapper[4796]: I1125 15:34:29.765473 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="80481045-d9ca-4f5c-a9d4-2b68aa5a1d26" containerName="container-00" Nov 25 15:34:29 crc kubenswrapper[4796]: I1125 15:34:29.766274 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j2hr4/crc-debug-qltzt" Nov 25 15:34:29 crc kubenswrapper[4796]: I1125 15:34:29.914819 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bpkt\" (UniqueName: \"kubernetes.io/projected/3d3d50e0-dfd8-4128-a799-61a9dc8600e2-kube-api-access-9bpkt\") pod \"crc-debug-qltzt\" (UID: \"3d3d50e0-dfd8-4128-a799-61a9dc8600e2\") " pod="openshift-must-gather-j2hr4/crc-debug-qltzt" Nov 25 15:34:29 crc kubenswrapper[4796]: I1125 15:34:29.914937 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d3d50e0-dfd8-4128-a799-61a9dc8600e2-host\") pod \"crc-debug-qltzt\" (UID: \"3d3d50e0-dfd8-4128-a799-61a9dc8600e2\") " pod="openshift-must-gather-j2hr4/crc-debug-qltzt" Nov 25 15:34:30 crc kubenswrapper[4796]: I1125 15:34:30.017704 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bpkt\" (UniqueName: \"kubernetes.io/projected/3d3d50e0-dfd8-4128-a799-61a9dc8600e2-kube-api-access-9bpkt\") pod \"crc-debug-qltzt\" (UID: \"3d3d50e0-dfd8-4128-a799-61a9dc8600e2\") " pod="openshift-must-gather-j2hr4/crc-debug-qltzt" Nov 25 15:34:30 crc kubenswrapper[4796]: I1125 15:34:30.017793 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d3d50e0-dfd8-4128-a799-61a9dc8600e2-host\") pod \"crc-debug-qltzt\" (UID: \"3d3d50e0-dfd8-4128-a799-61a9dc8600e2\") " pod="openshift-must-gather-j2hr4/crc-debug-qltzt" Nov 25 15:34:30 crc kubenswrapper[4796]: I1125 15:34:30.017937 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d3d50e0-dfd8-4128-a799-61a9dc8600e2-host\") pod \"crc-debug-qltzt\" (UID: \"3d3d50e0-dfd8-4128-a799-61a9dc8600e2\") " pod="openshift-must-gather-j2hr4/crc-debug-qltzt" Nov 25 15:34:30 crc 
kubenswrapper[4796]: I1125 15:34:30.080822 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bpkt\" (UniqueName: \"kubernetes.io/projected/3d3d50e0-dfd8-4128-a799-61a9dc8600e2-kube-api-access-9bpkt\") pod \"crc-debug-qltzt\" (UID: \"3d3d50e0-dfd8-4128-a799-61a9dc8600e2\") " pod="openshift-must-gather-j2hr4/crc-debug-qltzt" Nov 25 15:34:30 crc kubenswrapper[4796]: I1125 15:34:30.083952 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j2hr4/crc-debug-qltzt" Nov 25 15:34:30 crc kubenswrapper[4796]: I1125 15:34:30.102530 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8642077afb134e15aab9721c4c097ea821fd0e7b041094450eb0b6201912ddb" Nov 25 15:34:30 crc kubenswrapper[4796]: I1125 15:34:30.102655 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j2hr4/crc-debug-smd22" Nov 25 15:34:30 crc kubenswrapper[4796]: W1125 15:34:30.131894 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d3d50e0_dfd8_4128_a799_61a9dc8600e2.slice/crio-f26eb4eb39dbcd94ed9d102bf8c10342fa050d95623b950321a0574f1b110fe1 WatchSource:0}: Error finding container f26eb4eb39dbcd94ed9d102bf8c10342fa050d95623b950321a0574f1b110fe1: Status 404 returned error can't find the container with id f26eb4eb39dbcd94ed9d102bf8c10342fa050d95623b950321a0574f1b110fe1 Nov 25 15:34:30 crc kubenswrapper[4796]: I1125 15:34:30.421524 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80481045-d9ca-4f5c-a9d4-2b68aa5a1d26" path="/var/lib/kubelet/pods/80481045-d9ca-4f5c-a9d4-2b68aa5a1d26/volumes" Nov 25 15:34:31 crc kubenswrapper[4796]: I1125 15:34:31.112122 4796 generic.go:334] "Generic (PLEG): container finished" podID="3d3d50e0-dfd8-4128-a799-61a9dc8600e2" 
containerID="fb2aa68219d2a54ec7a431c5b90d74e28cb08d2d4766e8dd62fbe456c72bfd05" exitCode=0 Nov 25 15:34:31 crc kubenswrapper[4796]: I1125 15:34:31.112168 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j2hr4/crc-debug-qltzt" event={"ID":"3d3d50e0-dfd8-4128-a799-61a9dc8600e2","Type":"ContainerDied","Data":"fb2aa68219d2a54ec7a431c5b90d74e28cb08d2d4766e8dd62fbe456c72bfd05"} Nov 25 15:34:31 crc kubenswrapper[4796]: I1125 15:34:31.112199 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j2hr4/crc-debug-qltzt" event={"ID":"3d3d50e0-dfd8-4128-a799-61a9dc8600e2","Type":"ContainerStarted","Data":"f26eb4eb39dbcd94ed9d102bf8c10342fa050d95623b950321a0574f1b110fe1"} Nov 25 15:34:31 crc kubenswrapper[4796]: I1125 15:34:31.157487 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-j2hr4/crc-debug-qltzt"] Nov 25 15:34:31 crc kubenswrapper[4796]: I1125 15:34:31.168125 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-j2hr4/crc-debug-qltzt"] Nov 25 15:34:32 crc kubenswrapper[4796]: I1125 15:34:32.366702 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-j2hr4/crc-debug-qltzt" Nov 25 15:34:32 crc kubenswrapper[4796]: I1125 15:34:32.466508 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bpkt\" (UniqueName: \"kubernetes.io/projected/3d3d50e0-dfd8-4128-a799-61a9dc8600e2-kube-api-access-9bpkt\") pod \"3d3d50e0-dfd8-4128-a799-61a9dc8600e2\" (UID: \"3d3d50e0-dfd8-4128-a799-61a9dc8600e2\") " Nov 25 15:34:32 crc kubenswrapper[4796]: I1125 15:34:32.466700 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d3d50e0-dfd8-4128-a799-61a9dc8600e2-host\") pod \"3d3d50e0-dfd8-4128-a799-61a9dc8600e2\" (UID: \"3d3d50e0-dfd8-4128-a799-61a9dc8600e2\") " Nov 25 15:34:32 crc kubenswrapper[4796]: I1125 15:34:32.468277 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d3d50e0-dfd8-4128-a799-61a9dc8600e2-host" (OuterVolumeSpecName: "host") pod "3d3d50e0-dfd8-4128-a799-61a9dc8600e2" (UID: "3d3d50e0-dfd8-4128-a799-61a9dc8600e2"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:34:32 crc kubenswrapper[4796]: I1125 15:34:32.473414 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d3d50e0-dfd8-4128-a799-61a9dc8600e2-kube-api-access-9bpkt" (OuterVolumeSpecName: "kube-api-access-9bpkt") pod "3d3d50e0-dfd8-4128-a799-61a9dc8600e2" (UID: "3d3d50e0-dfd8-4128-a799-61a9dc8600e2"). InnerVolumeSpecName "kube-api-access-9bpkt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:34:32 crc kubenswrapper[4796]: I1125 15:34:32.568219 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bpkt\" (UniqueName: \"kubernetes.io/projected/3d3d50e0-dfd8-4128-a799-61a9dc8600e2-kube-api-access-9bpkt\") on node \"crc\" DevicePath \"\"" Nov 25 15:34:32 crc kubenswrapper[4796]: I1125 15:34:32.568254 4796 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d3d50e0-dfd8-4128-a799-61a9dc8600e2-host\") on node \"crc\" DevicePath \"\"" Nov 25 15:34:33 crc kubenswrapper[4796]: I1125 15:34:33.133119 4796 scope.go:117] "RemoveContainer" containerID="fb2aa68219d2a54ec7a431c5b90d74e28cb08d2d4766e8dd62fbe456c72bfd05" Nov 25 15:34:33 crc kubenswrapper[4796]: I1125 15:34:33.133167 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j2hr4/crc-debug-qltzt" Nov 25 15:34:34 crc kubenswrapper[4796]: I1125 15:34:34.422209 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d3d50e0-dfd8-4128-a799-61a9dc8600e2" path="/var/lib/kubelet/pods/3d3d50e0-dfd8-4128-a799-61a9dc8600e2/volumes" Nov 25 15:34:55 crc kubenswrapper[4796]: I1125 15:34:55.775354 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-648cbfbf74-5bhgn_f31c41f3-602c-427d-8728-9368c92a8d35/barbican-api/0.log" Nov 25 15:34:55 crc kubenswrapper[4796]: I1125 15:34:55.969280 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-648cbfbf74-5bhgn_f31c41f3-602c-427d-8728-9368c92a8d35/barbican-api-log/0.log" Nov 25 15:34:56 crc kubenswrapper[4796]: I1125 15:34:56.070104 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-696c6c8f78-kwfxh_71e86788-aa18-413b-aaa7-f216ef8d4f2b/barbican-keystone-listener/0.log" Nov 25 15:34:56 crc kubenswrapper[4796]: I1125 15:34:56.136200 4796 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-696c6c8f78-kwfxh_71e86788-aa18-413b-aaa7-f216ef8d4f2b/barbican-keystone-listener-log/0.log" Nov 25 15:34:56 crc kubenswrapper[4796]: I1125 15:34:56.219029 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-847768d9dc-hdkcj_c2ea5acd-889d-439f-9295-39424d08c923/barbican-worker/0.log" Nov 25 15:34:56 crc kubenswrapper[4796]: I1125 15:34:56.297755 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-847768d9dc-hdkcj_c2ea5acd-889d-439f-9295-39424d08c923/barbican-worker-log/0.log" Nov 25 15:34:56 crc kubenswrapper[4796]: I1125 15:34:56.488332 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-5kw94_e06f3673-5956-425d-aefa-270976a3804d/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:34:56 crc kubenswrapper[4796]: I1125 15:34:56.609929 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_37724a0c-3784-401a-8214-3dcb37d2ce4f/ceilometer-notification-agent/0.log" Nov 25 15:34:56 crc kubenswrapper[4796]: I1125 15:34:56.615529 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_37724a0c-3784-401a-8214-3dcb37d2ce4f/ceilometer-central-agent/0.log" Nov 25 15:34:56 crc kubenswrapper[4796]: I1125 15:34:56.692984 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_37724a0c-3784-401a-8214-3dcb37d2ce4f/sg-core/0.log" Nov 25 15:34:56 crc kubenswrapper[4796]: I1125 15:34:56.697257 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_37724a0c-3784-401a-8214-3dcb37d2ce4f/proxy-httpd/0.log" Nov 25 15:34:56 crc kubenswrapper[4796]: I1125 15:34:56.890461 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_213ec08a-1b84-45bb-a867-7f077f18c908/cinder-api/0.log" Nov 25 15:34:56 crc 
kubenswrapper[4796]: I1125 15:34:56.896518 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_213ec08a-1b84-45bb-a867-7f077f18c908/cinder-api-log/0.log" Nov 25 15:34:57 crc kubenswrapper[4796]: I1125 15:34:57.115973 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7ac2f3b3-e1cc-4536-b6b3-eacb46b887db/cinder-scheduler/0.log" Nov 25 15:34:57 crc kubenswrapper[4796]: I1125 15:34:57.137569 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7ac2f3b3-e1cc-4536-b6b3-eacb46b887db/probe/0.log" Nov 25 15:34:57 crc kubenswrapper[4796]: I1125 15:34:57.213183 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-8ktgh_cb697a58-06f8-4133-bb60-109f14009dad/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:34:57 crc kubenswrapper[4796]: I1125 15:34:57.327431 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-7x58g_7ee7821f-7c42-4833-bdda-e32b06b2e1b8/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:34:57 crc kubenswrapper[4796]: I1125 15:34:57.407301 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-tjxqx_64408db4-ea13-40ee-b40d-ce6e489f2b82/init/0.log" Nov 25 15:34:57 crc kubenswrapper[4796]: I1125 15:34:57.550155 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-tjxqx_64408db4-ea13-40ee-b40d-ce6e489f2b82/init/0.log" Nov 25 15:34:57 crc kubenswrapper[4796]: I1125 15:34:57.601751 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-tjxqx_64408db4-ea13-40ee-b40d-ce6e489f2b82/dnsmasq-dns/0.log" Nov 25 15:34:57 crc kubenswrapper[4796]: I1125 15:34:57.664320 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-h678c_9c76afe2-174a-4c31-a551-101661ae546b/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:34:57 crc kubenswrapper[4796]: I1125 15:34:57.939209 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cbf103ff-9a5b-408b-b69a-9383d471a83a/glance-log/0.log" Nov 25 15:34:57 crc kubenswrapper[4796]: I1125 15:34:57.964625 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cbf103ff-9a5b-408b-b69a-9383d471a83a/glance-httpd/0.log" Nov 25 15:34:58 crc kubenswrapper[4796]: I1125 15:34:58.090441 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_498b441d-79fc-4fa9-b857-72cf2f022ec9/glance-httpd/0.log" Nov 25 15:34:58 crc kubenswrapper[4796]: I1125 15:34:58.107901 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_498b441d-79fc-4fa9-b857-72cf2f022ec9/glance-log/0.log" Nov 25 15:34:58 crc kubenswrapper[4796]: I1125 15:34:58.412735 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-674489f5b-nnl97_b8f52433-dd17-499e-8ac4-bda250a52460/horizon/0.log" Nov 25 15:34:58 crc kubenswrapper[4796]: I1125 15:34:58.465469 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-p7rp8_552fef9f-5b94-4e45-9765-5b5e6ee62bfa/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:34:58 crc kubenswrapper[4796]: I1125 15:34:58.604406 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-674489f5b-nnl97_b8f52433-dd17-499e-8ac4-bda250a52460/horizon-log/0.log" Nov 25 15:34:58 crc kubenswrapper[4796]: I1125 15:34:58.671809 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-wvt59_e7c0033b-a387-447e-89cf-43e3a0f237d0/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:34:58 crc kubenswrapper[4796]: I1125 15:34:58.885788 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29401381-s9lbp_72d4d931-5b18-49ad-a427-9997259fc320/keystone-cron/0.log" Nov 25 15:34:58 crc kubenswrapper[4796]: I1125 15:34:58.965459 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5d994c97d7-9qxnr_47119c19-fca4-4a63-8170-d4dee8201af8/keystone-api/0.log" Nov 25 15:34:59 crc kubenswrapper[4796]: I1125 15:34:59.101225 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_da9248b8-0e46-4c9a-837c-b5591fc3e559/kube-state-metrics/0.log" Nov 25 15:34:59 crc kubenswrapper[4796]: I1125 15:34:59.172873 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-r8wcb_5e5ea533-89ca-434d-bde5-0222fa319b66/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:34:59 crc kubenswrapper[4796]: I1125 15:34:59.586195 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b8d7f79d9-dhp4t_d300f40d-3177-4832-9df9-b724d40b8622/neutron-api/0.log" Nov 25 15:34:59 crc kubenswrapper[4796]: I1125 15:34:59.630799 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7b8d7f79d9-dhp4t_d300f40d-3177-4832-9df9-b724d40b8622/neutron-httpd/0.log" Nov 25 15:34:59 crc kubenswrapper[4796]: I1125 15:34:59.715075 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-brndt_3a001f8e-537d-4c17-88cd-b1c2a8727074/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:35:00 crc kubenswrapper[4796]: I1125 15:35:00.254646 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_86950200-06a3-4ad0-9a40-d70deeba8ce3/nova-api-log/0.log" Nov 25 15:35:00 crc kubenswrapper[4796]: I1125 15:35:00.328535 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_3d5bdd76-c116-469f-84a1-c869e4ffb5ce/nova-cell0-conductor-conductor/0.log" Nov 25 15:35:00 crc kubenswrapper[4796]: I1125 15:35:00.609264 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_614944f2-a1d3-41e0-82a4-3182bd6770af/nova-cell1-conductor-conductor/0.log" Nov 25 15:35:00 crc kubenswrapper[4796]: I1125 15:35:00.722013 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_86950200-06a3-4ad0-9a40-d70deeba8ce3/nova-api-api/0.log" Nov 25 15:35:01 crc kubenswrapper[4796]: I1125 15:35:01.254637 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_a14facfc-22d1-4b36-a006-23af447aef93/nova-cell1-novncproxy-novncproxy/0.log" Nov 25 15:35:01 crc kubenswrapper[4796]: I1125 15:35:01.306940 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-5l2zn_8c595aba-53f4-47cf-9b97-c489fb013f6e/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:35:01 crc kubenswrapper[4796]: I1125 15:35:01.614418 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_74f7062a-bcf7-494e-81ff-955f99fd6707/nova-metadata-log/0.log" Nov 25 15:35:01 crc kubenswrapper[4796]: I1125 15:35:01.890212 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fba50302-0f98-4117-ae49-f710e1543e98/mysql-bootstrap/0.log" Nov 25 15:35:01 crc kubenswrapper[4796]: I1125 15:35:01.892542 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_f40e0fe8-470b-4092-a179-4e4df56f8900/nova-scheduler-scheduler/0.log" Nov 25 15:35:02 crc kubenswrapper[4796]: I1125 
15:35:02.030068 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fba50302-0f98-4117-ae49-f710e1543e98/mysql-bootstrap/0.log" Nov 25 15:35:02 crc kubenswrapper[4796]: I1125 15:35:02.147959 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fba50302-0f98-4117-ae49-f710e1543e98/galera/0.log" Nov 25 15:35:02 crc kubenswrapper[4796]: I1125 15:35:02.299680 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_25a388f4-cd5a-404d-a777-46f4410e0b3a/mysql-bootstrap/0.log" Nov 25 15:35:02 crc kubenswrapper[4796]: I1125 15:35:02.846237 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_25a388f4-cd5a-404d-a777-46f4410e0b3a/mysql-bootstrap/0.log" Nov 25 15:35:02 crc kubenswrapper[4796]: I1125 15:35:02.928247 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_74f7062a-bcf7-494e-81ff-955f99fd6707/nova-metadata-metadata/0.log" Nov 25 15:35:02 crc kubenswrapper[4796]: I1125 15:35:02.964887 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_25a388f4-cd5a-404d-a777-46f4410e0b3a/galera/0.log" Nov 25 15:35:03 crc kubenswrapper[4796]: I1125 15:35:03.056455 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_120f9ac5-531c-4821-b033-d4b316f6ea61/openstackclient/0.log" Nov 25 15:35:03 crc kubenswrapper[4796]: I1125 15:35:03.238294 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-jftkt_9cdd5460-fb40-4f5a-9fcb-d4dcb1e05718/ovn-controller/0.log" Nov 25 15:35:03 crc kubenswrapper[4796]: I1125 15:35:03.362978 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-t8mfd_5d31b742-a284-4a5f-a151-2ee4077a3071/openstack-network-exporter/0.log" Nov 25 15:35:03 crc kubenswrapper[4796]: I1125 15:35:03.449328 4796 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bcptz_130773d9-cc1a-46d3-91a4-1880735e0351/ovsdb-server-init/0.log" Nov 25 15:35:03 crc kubenswrapper[4796]: I1125 15:35:03.757330 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bcptz_130773d9-cc1a-46d3-91a4-1880735e0351/ovsdb-server/0.log" Nov 25 15:35:03 crc kubenswrapper[4796]: I1125 15:35:03.813480 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bcptz_130773d9-cc1a-46d3-91a4-1880735e0351/ovsdb-server-init/0.log" Nov 25 15:35:03 crc kubenswrapper[4796]: I1125 15:35:03.823193 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bcptz_130773d9-cc1a-46d3-91a4-1880735e0351/ovs-vswitchd/0.log" Nov 25 15:35:03 crc kubenswrapper[4796]: I1125 15:35:03.988450 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-fwnf6_a07af2cf-4057-4032-8535-6e8067892269/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:35:04 crc kubenswrapper[4796]: I1125 15:35:04.067696 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b5336ecd-5d7e-4b73-b2a7-d289b8578641/ovn-northd/0.log" Nov 25 15:35:04 crc kubenswrapper[4796]: I1125 15:35:04.070737 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b5336ecd-5d7e-4b73-b2a7-d289b8578641/openstack-network-exporter/0.log" Nov 25 15:35:04 crc kubenswrapper[4796]: I1125 15:35:04.313960 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064/openstack-network-exporter/0.log" Nov 25 15:35:04 crc kubenswrapper[4796]: I1125 15:35:04.317721 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_9f7a16fc-0fd1-4d5b-ae32-9f7d95e8a064/ovsdbserver-nb/0.log" Nov 25 15:35:04 crc kubenswrapper[4796]: I1125 
15:35:04.441160 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1c9e8c13-5a24-4394-bdc8-aa4965e931b8/openstack-network-exporter/0.log" Nov 25 15:35:04 crc kubenswrapper[4796]: I1125 15:35:04.561104 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1c9e8c13-5a24-4394-bdc8-aa4965e931b8/ovsdbserver-sb/0.log" Nov 25 15:35:04 crc kubenswrapper[4796]: I1125 15:35:04.666713 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-79bd96dcd6-f2n5f_970dd58d-4266-4a39-9d8b-75190f4286bc/placement-api/0.log" Nov 25 15:35:04 crc kubenswrapper[4796]: I1125 15:35:04.791673 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f5d14d1f-b7c5-4d86-9420-fbf8a044780c/setup-container/0.log" Nov 25 15:35:04 crc kubenswrapper[4796]: I1125 15:35:04.805999 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-79bd96dcd6-f2n5f_970dd58d-4266-4a39-9d8b-75190f4286bc/placement-log/0.log" Nov 25 15:35:04 crc kubenswrapper[4796]: I1125 15:35:04.995444 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f5d14d1f-b7c5-4d86-9420-fbf8a044780c/rabbitmq/0.log" Nov 25 15:35:05 crc kubenswrapper[4796]: I1125 15:35:05.025215 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f5d14d1f-b7c5-4d86-9420-fbf8a044780c/setup-container/0.log" Nov 25 15:35:05 crc kubenswrapper[4796]: I1125 15:35:05.112408 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0bde17cd-d557-45b1-8796-d7293d21c038/setup-container/0.log" Nov 25 15:35:05 crc kubenswrapper[4796]: I1125 15:35:05.321361 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0bde17cd-d557-45b1-8796-d7293d21c038/setup-container/0.log" Nov 25 15:35:05 crc kubenswrapper[4796]: I1125 15:35:05.341066 4796 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0bde17cd-d557-45b1-8796-d7293d21c038/rabbitmq/0.log" Nov 25 15:35:05 crc kubenswrapper[4796]: I1125 15:35:05.440098 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-z6r4q_b8bdd873-343d-4d77-849e-14786c8db01d/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:35:05 crc kubenswrapper[4796]: I1125 15:35:05.585313 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-8rjcv_3fc16f66-6859-4f61-bdbb-7deaf5ec6831/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:35:05 crc kubenswrapper[4796]: I1125 15:35:05.708458 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-fbb4q_6699babf-2b9f-432c-b0fd-60452bb9ad6b/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:35:05 crc kubenswrapper[4796]: I1125 15:35:05.799541 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-8pk87_39a7e6ad-f344-409f-b5a0-664a602fdf66/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:35:05 crc kubenswrapper[4796]: I1125 15:35:05.939955 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-8zlff_60a504c5-7f00-43a4-a364-c3be0b31a42d/ssh-known-hosts-edpm-deployment/0.log" Nov 25 15:35:06 crc kubenswrapper[4796]: I1125 15:35:06.118084 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6b6dc55d99-xcq8j_05a9e311-75a5-4732-9103-ba2bc1e708ad/proxy-server/0.log" Nov 25 15:35:06 crc kubenswrapper[4796]: I1125 15:35:06.217013 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6b6dc55d99-xcq8j_05a9e311-75a5-4732-9103-ba2bc1e708ad/proxy-httpd/0.log" Nov 25 15:35:06 crc kubenswrapper[4796]: I1125 15:35:06.223527 4796 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-qbvtm_8a9e78aa-7f69-46de-b6a9-03f837e4f364/swift-ring-rebalance/0.log" Nov 25 15:35:06 crc kubenswrapper[4796]: I1125 15:35:06.378609 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/account-reaper/0.log" Nov 25 15:35:06 crc kubenswrapper[4796]: I1125 15:35:06.437206 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/account-auditor/0.log" Nov 25 15:35:06 crc kubenswrapper[4796]: I1125 15:35:06.511678 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/account-replicator/0.log" Nov 25 15:35:06 crc kubenswrapper[4796]: I1125 15:35:06.592758 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/account-server/0.log" Nov 25 15:35:06 crc kubenswrapper[4796]: I1125 15:35:06.621987 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/container-auditor/0.log" Nov 25 15:35:06 crc kubenswrapper[4796]: I1125 15:35:06.727242 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/container-replicator/0.log" Nov 25 15:35:06 crc kubenswrapper[4796]: I1125 15:35:06.732423 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/container-server/0.log" Nov 25 15:35:06 crc kubenswrapper[4796]: I1125 15:35:06.839517 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/container-updater/0.log" Nov 25 15:35:06 crc kubenswrapper[4796]: I1125 15:35:06.880739 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/object-auditor/0.log" Nov 25 15:35:06 crc kubenswrapper[4796]: I1125 15:35:06.940255 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/object-expirer/0.log" Nov 25 15:35:07 crc kubenswrapper[4796]: I1125 15:35:07.004031 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/object-replicator/0.log" Nov 25 15:35:07 crc kubenswrapper[4796]: I1125 15:35:07.013236 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/object-server/0.log" Nov 25 15:35:07 crc kubenswrapper[4796]: I1125 15:35:07.097548 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/object-updater/0.log" Nov 25 15:35:07 crc kubenswrapper[4796]: I1125 15:35:07.173647 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/rsync/0.log" Nov 25 15:35:07 crc kubenswrapper[4796]: I1125 15:35:07.197162 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49501e2a-5ad0-4de7-9b98-510c0c55863f/swift-recon-cron/0.log" Nov 25 15:35:07 crc kubenswrapper[4796]: I1125 15:35:07.353763 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-99788_885ec954-19ea-488f-badc-9dc879859a45/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:35:07 crc kubenswrapper[4796]: I1125 15:35:07.477713 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_6a50c6fb-b1a0-468c-858a-e6ba3fd3cfe6/tempest-tests-tempest-tests-runner/0.log" Nov 25 15:35:07 crc kubenswrapper[4796]: I1125 15:35:07.580021 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_99ad25b2-341c-43c5-a15a-12b70e1711b3/test-operator-logs-container/0.log" Nov 25 15:35:07 crc kubenswrapper[4796]: I1125 15:35:07.723205 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-7psgw_f7f8ec51-957f-4356-888b-5bec99691717/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:35:17 crc kubenswrapper[4796]: I1125 15:35:17.736907 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_241f82db-29d5-4cb8-bd81-3e758b9cd855/memcached/0.log" Nov 25 15:35:34 crc kubenswrapper[4796]: I1125 15:35:34.267812 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-7q45f_3472c0d0-0763-4342-83cb-5b7a44e5b2e0/kube-rbac-proxy/0.log" Nov 25 15:35:34 crc kubenswrapper[4796]: I1125 15:35:34.364852 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-7q45f_3472c0d0-0763-4342-83cb-5b7a44e5b2e0/manager/0.log" Nov 25 15:35:34 crc kubenswrapper[4796]: I1125 15:35:34.460462 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-4w4wl_3a3976ed-e631-4fda-9b60-1e4b62992c70/kube-rbac-proxy/0.log" Nov 25 15:35:34 crc kubenswrapper[4796]: I1125 15:35:34.558122 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-4w4wl_3a3976ed-e631-4fda-9b60-1e4b62992c70/manager/0.log" Nov 25 15:35:34 crc kubenswrapper[4796]: I1125 15:35:34.662172 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-47sfh_efaf4581-131a-496d-ba2f-75db34748600/kube-rbac-proxy/0.log" Nov 25 15:35:34 crc kubenswrapper[4796]: I1125 
15:35:34.736554 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-47sfh_efaf4581-131a-496d-ba2f-75db34748600/manager/0.log" Nov 25 15:35:34 crc kubenswrapper[4796]: I1125 15:35:34.813801 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f_be76be41-9513-40eb-9140-8d3f2ab3a05d/util/0.log" Nov 25 15:35:35 crc kubenswrapper[4796]: I1125 15:35:35.011381 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f_be76be41-9513-40eb-9140-8d3f2ab3a05d/pull/0.log" Nov 25 15:35:35 crc kubenswrapper[4796]: I1125 15:35:35.011540 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f_be76be41-9513-40eb-9140-8d3f2ab3a05d/pull/0.log" Nov 25 15:35:35 crc kubenswrapper[4796]: I1125 15:35:35.168026 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f_be76be41-9513-40eb-9140-8d3f2ab3a05d/extract/0.log" Nov 25 15:35:35 crc kubenswrapper[4796]: I1125 15:35:35.299630 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f_be76be41-9513-40eb-9140-8d3f2ab3a05d/util/0.log" Nov 25 15:35:35 crc kubenswrapper[4796]: I1125 15:35:35.302096 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f_be76be41-9513-40eb-9140-8d3f2ab3a05d/util/0.log" Nov 25 15:35:35 crc kubenswrapper[4796]: I1125 15:35:35.382386 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_e9b406d0c1d2aea76df313e9f99efd7b723e69ce1f5778051c6023a4e66d46f_be76be41-9513-40eb-9140-8d3f2ab3a05d/pull/0.log" Nov 25 15:35:35 crc kubenswrapper[4796]: I1125 15:35:35.502933 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-nfdb6_ed513bf3-e75f-40b3-814e-508f4d9e9ce6/kube-rbac-proxy/0.log" Nov 25 15:35:35 crc kubenswrapper[4796]: I1125 15:35:35.614960 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-nfdb6_ed513bf3-e75f-40b3-814e-508f4d9e9ce6/manager/0.log" Nov 25 15:35:35 crc kubenswrapper[4796]: I1125 15:35:35.622814 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-w8gkv_4f74b624-2ef6-4289-8cb1-8d6babc260f5/kube-rbac-proxy/0.log" Nov 25 15:35:35 crc kubenswrapper[4796]: I1125 15:35:35.713676 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-w8gkv_4f74b624-2ef6-4289-8cb1-8d6babc260f5/manager/0.log" Nov 25 15:35:35 crc kubenswrapper[4796]: I1125 15:35:35.771123 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-7ljk7_5e82891b-b135-4f6a-8341-7ae6efb7d7ab/kube-rbac-proxy/0.log" Nov 25 15:35:35 crc kubenswrapper[4796]: I1125 15:35:35.826505 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-7ljk7_5e82891b-b135-4f6a-8341-7ae6efb7d7ab/manager/0.log" Nov 25 15:35:35 crc kubenswrapper[4796]: I1125 15:35:35.934301 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-tbmwj_9ec5036f-9a2f-4a3f-ad57-191ac97cf6ff/kube-rbac-proxy/0.log" Nov 25 15:35:36 crc kubenswrapper[4796]: I1125 
15:35:36.146075 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-tbmwj_9ec5036f-9a2f-4a3f-ad57-191ac97cf6ff/manager/0.log" Nov 25 15:35:36 crc kubenswrapper[4796]: I1125 15:35:36.156761 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-7xmhd_c20eb9b8-4c87-4145-b550-e887fd680797/manager/0.log" Nov 25 15:35:36 crc kubenswrapper[4796]: I1125 15:35:36.191220 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-7xmhd_c20eb9b8-4c87-4145-b550-e887fd680797/kube-rbac-proxy/0.log" Nov 25 15:35:36 crc kubenswrapper[4796]: I1125 15:35:36.348168 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-wqkh5_f1937d85-62aa-4880-81ca-91d58ab2fba2/kube-rbac-proxy/0.log" Nov 25 15:35:36 crc kubenswrapper[4796]: I1125 15:35:36.401729 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-wqkh5_f1937d85-62aa-4880-81ca-91d58ab2fba2/manager/0.log" Nov 25 15:35:36 crc kubenswrapper[4796]: I1125 15:35:36.436211 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-v9j5d_e5bf5c53-1a09-4635-9ebb-e2a6fb722e06/kube-rbac-proxy/0.log" Nov 25 15:35:36 crc kubenswrapper[4796]: I1125 15:35:36.561546 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-v9j5d_e5bf5c53-1a09-4635-9ebb-e2a6fb722e06/manager/0.log" Nov 25 15:35:36 crc kubenswrapper[4796]: I1125 15:35:36.593220 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-h96k8_7cda050e-831a-42f8-93f7-c33e10a8b119/kube-rbac-proxy/0.log" Nov 
25 15:35:36 crc kubenswrapper[4796]: I1125 15:35:36.627117 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-h96k8_7cda050e-831a-42f8-93f7-c33e10a8b119/manager/0.log" Nov 25 15:35:36 crc kubenswrapper[4796]: I1125 15:35:36.762904 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-mfg66_b652a700-3131-4706-a300-c3f2c54519a3/kube-rbac-proxy/0.log" Nov 25 15:35:36 crc kubenswrapper[4796]: I1125 15:35:36.813640 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-mfg66_b652a700-3131-4706-a300-c3f2c54519a3/manager/0.log" Nov 25 15:35:36 crc kubenswrapper[4796]: I1125 15:35:36.931819 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-7c6bw_5575133b-4226-4a90-b484-aeb1bbcb4dde/kube-rbac-proxy/0.log" Nov 25 15:35:37 crc kubenswrapper[4796]: I1125 15:35:37.017340 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-jsccj_4e72b995-27a7-4777-9d17-7b04a3933074/kube-rbac-proxy/0.log" Nov 25 15:35:37 crc kubenswrapper[4796]: I1125 15:35:37.076133 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-7c6bw_5575133b-4226-4a90-b484-aeb1bbcb4dde/manager/0.log" Nov 25 15:35:37 crc kubenswrapper[4796]: I1125 15:35:37.132469 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-jsccj_4e72b995-27a7-4777-9d17-7b04a3933074/manager/0.log" Nov 25 15:35:37 crc kubenswrapper[4796]: I1125 15:35:37.211425 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh_399a4df5-120a-40fc-9570-4555ab767e70/kube-rbac-proxy/0.log" Nov 25 15:35:37 crc kubenswrapper[4796]: I1125 15:35:37.243360 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-jt7fh_399a4df5-120a-40fc-9570-4555ab767e70/manager/0.log" Nov 25 15:35:37 crc kubenswrapper[4796]: I1125 15:35:37.586320 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5fd4b8b4b5-s2rpd_742f74a5-8ef5-42df-8644-16b6209f5172/operator/0.log" Nov 25 15:35:37 crc kubenswrapper[4796]: I1125 15:35:37.609785 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-28ljh_edc88d92-5818-49e5-877c-5efd6a8e1912/registry-server/0.log" Nov 25 15:35:37 crc kubenswrapper[4796]: I1125 15:35:37.683372 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-jg56b_2d798aaf-7f02-472d-a5c9-53853ce7b2a4/kube-rbac-proxy/0.log" Nov 25 15:35:37 crc kubenswrapper[4796]: I1125 15:35:37.833595 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-jg56b_2d798aaf-7f02-472d-a5c9-53853ce7b2a4/manager/0.log" Nov 25 15:35:37 crc kubenswrapper[4796]: I1125 15:35:37.921523 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-z7q4q_217cf053-2a6e-4fbd-8544-830952c6c803/kube-rbac-proxy/0.log" Nov 25 15:35:37 crc kubenswrapper[4796]: I1125 15:35:37.987814 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-z7q4q_217cf053-2a6e-4fbd-8544-830952c6c803/manager/0.log" Nov 25 15:35:38 crc kubenswrapper[4796]: I1125 
15:35:38.129079 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-7b78n_833cc3da-1e55-4b00-9766-5bc81f81a506/operator/0.log" Nov 25 15:35:38 crc kubenswrapper[4796]: I1125 15:35:38.227933 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-6rrmf_5871d7ea-743f-4b9b-9d49-e02f51222ea7/kube-rbac-proxy/0.log" Nov 25 15:35:38 crc kubenswrapper[4796]: I1125 15:35:38.262280 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-6rrmf_5871d7ea-743f-4b9b-9d49-e02f51222ea7/manager/0.log" Nov 25 15:35:38 crc kubenswrapper[4796]: I1125 15:35:38.449785 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-2v9lc_bdc6cc60-f602-4a4e-9f3a-60fc12a9b29e/kube-rbac-proxy/0.log" Nov 25 15:35:38 crc kubenswrapper[4796]: I1125 15:35:38.535078 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-2v9lc_bdc6cc60-f602-4a4e-9f3a-60fc12a9b29e/manager/0.log" Nov 25 15:35:38 crc kubenswrapper[4796]: I1125 15:35:38.538906 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-77bf44fb75-9sjgx_909ee785-5087-4b08-9590-10993e0fdeba/manager/0.log" Nov 25 15:35:38 crc kubenswrapper[4796]: I1125 15:35:38.694218 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-6bbxk_dba98963-8ddb-46d0-a6a7-62f337d6d520/kube-rbac-proxy/0.log" Nov 25 15:35:38 crc kubenswrapper[4796]: I1125 15:35:38.697305 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-6bbxk_dba98963-8ddb-46d0-a6a7-62f337d6d520/manager/0.log" Nov 25 15:35:38 
crc kubenswrapper[4796]: I1125 15:35:38.788598 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-99zgm_312c47f9-34dd-4416-b396-fd4f9855e72e/manager/0.log" Nov 25 15:35:38 crc kubenswrapper[4796]: I1125 15:35:38.818841 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-99zgm_312c47f9-34dd-4416-b396-fd4f9855e72e/kube-rbac-proxy/0.log" Nov 25 15:35:49 crc kubenswrapper[4796]: I1125 15:35:49.513626 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:35:49 crc kubenswrapper[4796]: I1125 15:35:49.515714 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:35:56 crc kubenswrapper[4796]: I1125 15:35:56.681181 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-64jzs_63aeb87d-a8b1-40a5-95b9-e224d1bd968f/control-plane-machine-set-operator/0.log" Nov 25 15:35:56 crc kubenswrapper[4796]: I1125 15:35:56.838796 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lvdx5_67c0424c-b0ff-417d-bf4c-1cdcadd1ebac/kube-rbac-proxy/0.log" Nov 25 15:35:56 crc kubenswrapper[4796]: I1125 15:35:56.856309 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lvdx5_67c0424c-b0ff-417d-bf4c-1cdcadd1ebac/machine-api-operator/0.log" Nov 25 15:36:09 crc kubenswrapper[4796]: I1125 15:36:09.527869 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-qzs2l_4b5c4e21-18ed-4eee-a81a-f08cf71498e5/cert-manager-controller/0.log" Nov 25 15:36:09 crc kubenswrapper[4796]: I1125 15:36:09.726533 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-ttph6_67aeab52-9ff0-430d-8e78-0f46f59e1688/cert-manager-cainjector/0.log" Nov 25 15:36:09 crc kubenswrapper[4796]: I1125 15:36:09.766900 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-n7x98_d7365735-d514-48fd-9113-62a80d791d8b/cert-manager-webhook/0.log" Nov 25 15:36:19 crc kubenswrapper[4796]: I1125 15:36:19.513829 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:36:19 crc kubenswrapper[4796]: I1125 15:36:19.514810 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:36:22 crc kubenswrapper[4796]: I1125 15:36:22.193767 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-74nqq_ebb6d789-f33f-47d5-a8b5-b727a0d54def/nmstate-console-plugin/0.log" Nov 25 15:36:22 crc kubenswrapper[4796]: I1125 15:36:22.332857 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-handler-5whlr_d050fb17-6f98-4899-861e-b180f1587b64/nmstate-handler/0.log" Nov 25 15:36:22 crc kubenswrapper[4796]: I1125 15:36:22.365336 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-z2g7r_b129a211-721a-412c-95fd-a1c27b7d3092/kube-rbac-proxy/0.log" Nov 25 15:36:22 crc kubenswrapper[4796]: I1125 15:36:22.399356 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-z2g7r_b129a211-721a-412c-95fd-a1c27b7d3092/nmstate-metrics/0.log" Nov 25 15:36:22 crc kubenswrapper[4796]: I1125 15:36:22.581853 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-kcqf5_5c8c5a1b-b996-41da-96ab-07156e73016f/nmstate-operator/0.log" Nov 25 15:36:22 crc kubenswrapper[4796]: I1125 15:36:22.636066 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-2mjnf_7bcb5530-fd67-4fc7-96c1-dfdb9dd8ad67/nmstate-webhook/0.log" Nov 25 15:36:38 crc kubenswrapper[4796]: I1125 15:36:38.593312 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-zr8xl_1979dccd-b017-42f5-9fe1-8717af3f948a/kube-rbac-proxy/0.log" Nov 25 15:36:38 crc kubenswrapper[4796]: I1125 15:36:38.623766 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-zr8xl_1979dccd-b017-42f5-9fe1-8717af3f948a/controller/0.log" Nov 25 15:36:38 crc kubenswrapper[4796]: I1125 15:36:38.769386 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/cp-frr-files/0.log" Nov 25 15:36:39 crc kubenswrapper[4796]: I1125 15:36:39.007336 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/cp-frr-files/0.log" Nov 25 15:36:39 crc kubenswrapper[4796]: 
I1125 15:36:39.018694 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/cp-metrics/0.log" Nov 25 15:36:39 crc kubenswrapper[4796]: I1125 15:36:39.038470 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/cp-reloader/0.log" Nov 25 15:36:39 crc kubenswrapper[4796]: I1125 15:36:39.054061 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/cp-reloader/0.log" Nov 25 15:36:39 crc kubenswrapper[4796]: I1125 15:36:39.249518 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/cp-reloader/0.log" Nov 25 15:36:39 crc kubenswrapper[4796]: I1125 15:36:39.262198 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/cp-frr-files/0.log" Nov 25 15:36:39 crc kubenswrapper[4796]: I1125 15:36:39.277213 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/cp-metrics/0.log" Nov 25 15:36:39 crc kubenswrapper[4796]: I1125 15:36:39.278416 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/cp-metrics/0.log" Nov 25 15:36:39 crc kubenswrapper[4796]: I1125 15:36:39.443840 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/cp-reloader/0.log" Nov 25 15:36:39 crc kubenswrapper[4796]: I1125 15:36:39.452713 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/cp-frr-files/0.log" Nov 25 15:36:39 crc kubenswrapper[4796]: I1125 15:36:39.550797 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/cp-metrics/0.log" Nov 25 15:36:39 crc kubenswrapper[4796]: I1125 15:36:39.553548 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/controller/0.log" Nov 25 15:36:39 crc kubenswrapper[4796]: I1125 15:36:39.716771 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/frr-metrics/0.log" Nov 25 15:36:39 crc kubenswrapper[4796]: I1125 15:36:39.746909 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/kube-rbac-proxy/0.log" Nov 25 15:36:39 crc kubenswrapper[4796]: I1125 15:36:39.761009 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/kube-rbac-proxy-frr/0.log" Nov 25 15:36:39 crc kubenswrapper[4796]: I1125 15:36:39.999358 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/reloader/0.log" Nov 25 15:36:40 crc kubenswrapper[4796]: I1125 15:36:40.024371 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-zk9xk_79869a5f-b9a3-46e0-bac7-9ff9ac72b16c/frr-k8s-webhook-server/0.log" Nov 25 15:36:40 crc kubenswrapper[4796]: I1125 15:36:40.287662 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-68786bb9d9-qc95x_5f701779-96c6-4764-b207-88847114d7c8/manager/0.log" Nov 25 15:36:40 crc kubenswrapper[4796]: I1125 15:36:40.348225 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-778544677-4pg8n_5a58cf97-35a8-4201-91b5-c03fce0361b8/webhook-server/0.log" Nov 25 15:36:40 crc kubenswrapper[4796]: I1125 15:36:40.634270 4796 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kq8m7_7f037a6b-9e7f-401d-b4db-98132fb0f9b2/kube-rbac-proxy/0.log" Nov 25 15:36:41 crc kubenswrapper[4796]: I1125 15:36:41.074874 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kq8m7_7f037a6b-9e7f-401d-b4db-98132fb0f9b2/speaker/0.log" Nov 25 15:36:41 crc kubenswrapper[4796]: I1125 15:36:41.081953 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vhclt_4fc70054-d9cd-4545-b9e7-d6665887e94d/frr/0.log" Nov 25 15:36:49 crc kubenswrapper[4796]: I1125 15:36:49.514442 4796 patch_prober.go:28] interesting pod/machine-config-daemon-h6xfl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:36:49 crc kubenswrapper[4796]: I1125 15:36:49.515123 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:36:49 crc kubenswrapper[4796]: I1125 15:36:49.515176 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" Nov 25 15:36:49 crc kubenswrapper[4796]: I1125 15:36:49.516347 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927"} pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 15:36:49 crc kubenswrapper[4796]: I1125 15:36:49.516425 
4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerName="machine-config-daemon" containerID="cri-o://e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927" gracePeriod=600 Nov 25 15:36:49 crc kubenswrapper[4796]: E1125 15:36:49.639506 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:36:50 crc kubenswrapper[4796]: I1125 15:36:50.045236 4796 generic.go:334] "Generic (PLEG): container finished" podID="c683b765-b1f2-49b1-b29d-6466cda73ca8" containerID="e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927" exitCode=0 Nov 25 15:36:50 crc kubenswrapper[4796]: I1125 15:36:50.045308 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerDied","Data":"e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927"} Nov 25 15:36:50 crc kubenswrapper[4796]: I1125 15:36:50.045709 4796 scope.go:117] "RemoveContainer" containerID="ab149f04ee33eb6fa179e2fae0783da6b2b9681d3eecf03b9d858d176b0d61b6" Nov 25 15:36:50 crc kubenswrapper[4796]: I1125 15:36:50.046398 4796 scope.go:117] "RemoveContainer" containerID="e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927" Nov 25 15:36:50 crc kubenswrapper[4796]: E1125 15:36:50.046838 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:36:55 crc kubenswrapper[4796]: I1125 15:36:55.257380 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq_1fee00b0-68b7-43d4-85a5-d63daf73962d/util/0.log" Nov 25 15:36:55 crc kubenswrapper[4796]: I1125 15:36:55.512831 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq_1fee00b0-68b7-43d4-85a5-d63daf73962d/pull/0.log" Nov 25 15:36:55 crc kubenswrapper[4796]: I1125 15:36:55.513225 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq_1fee00b0-68b7-43d4-85a5-d63daf73962d/pull/0.log" Nov 25 15:36:55 crc kubenswrapper[4796]: I1125 15:36:55.537694 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq_1fee00b0-68b7-43d4-85a5-d63daf73962d/util/0.log" Nov 25 15:36:56 crc kubenswrapper[4796]: I1125 15:36:56.371869 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq_1fee00b0-68b7-43d4-85a5-d63daf73962d/util/0.log" Nov 25 15:36:56 crc kubenswrapper[4796]: I1125 15:36:56.506493 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq_1fee00b0-68b7-43d4-85a5-d63daf73962d/pull/0.log" Nov 25 15:36:56 crc kubenswrapper[4796]: I1125 15:36:56.516717 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772epqzxq_1fee00b0-68b7-43d4-85a5-d63daf73962d/extract/0.log" Nov 25 15:36:56 crc kubenswrapper[4796]: I1125 15:36:56.636218 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kwqht_60c5e697-1e70-4d50-a2ed-f7dba77a5520/extract-utilities/0.log" Nov 25 15:36:56 crc kubenswrapper[4796]: I1125 15:36:56.792989 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kwqht_60c5e697-1e70-4d50-a2ed-f7dba77a5520/extract-content/0.log" Nov 25 15:36:56 crc kubenswrapper[4796]: I1125 15:36:56.801361 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kwqht_60c5e697-1e70-4d50-a2ed-f7dba77a5520/extract-utilities/0.log" Nov 25 15:36:56 crc kubenswrapper[4796]: I1125 15:36:56.830624 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kwqht_60c5e697-1e70-4d50-a2ed-f7dba77a5520/extract-content/0.log" Nov 25 15:36:57 crc kubenswrapper[4796]: I1125 15:36:57.008545 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kwqht_60c5e697-1e70-4d50-a2ed-f7dba77a5520/extract-utilities/0.log" Nov 25 15:36:57 crc kubenswrapper[4796]: I1125 15:36:57.011218 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kwqht_60c5e697-1e70-4d50-a2ed-f7dba77a5520/extract-content/0.log" Nov 25 15:36:57 crc kubenswrapper[4796]: I1125 15:36:57.305136 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s7dmr_7d7052ec-4340-472c-8add-94483920eeac/extract-utilities/0.log" Nov 25 15:36:57 crc kubenswrapper[4796]: I1125 15:36:57.527518 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-s7dmr_7d7052ec-4340-472c-8add-94483920eeac/extract-utilities/0.log" Nov 25 15:36:57 crc kubenswrapper[4796]: I1125 15:36:57.560797 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s7dmr_7d7052ec-4340-472c-8add-94483920eeac/extract-content/0.log" Nov 25 15:36:57 crc kubenswrapper[4796]: I1125 15:36:57.587922 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s7dmr_7d7052ec-4340-472c-8add-94483920eeac/extract-content/0.log" Nov 25 15:36:57 crc kubenswrapper[4796]: I1125 15:36:57.746206 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kwqht_60c5e697-1e70-4d50-a2ed-f7dba77a5520/registry-server/0.log" Nov 25 15:36:58 crc kubenswrapper[4796]: I1125 15:36:58.173037 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s7dmr_7d7052ec-4340-472c-8add-94483920eeac/extract-content/0.log" Nov 25 15:36:58 crc kubenswrapper[4796]: I1125 15:36:58.204807 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s7dmr_7d7052ec-4340-472c-8add-94483920eeac/extract-utilities/0.log" Nov 25 15:36:58 crc kubenswrapper[4796]: I1125 15:36:58.363542 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb_655b2cd8-b6a5-4ab4-848d-908496b6bcc8/util/0.log" Nov 25 15:36:58 crc kubenswrapper[4796]: I1125 15:36:58.398074 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s7dmr_7d7052ec-4340-472c-8add-94483920eeac/registry-server/0.log" Nov 25 15:36:58 crc kubenswrapper[4796]: I1125 15:36:58.602188 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb_655b2cd8-b6a5-4ab4-848d-908496b6bcc8/pull/0.log" Nov 25 15:36:58 crc kubenswrapper[4796]: I1125 15:36:58.612285 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb_655b2cd8-b6a5-4ab4-848d-908496b6bcc8/util/0.log" Nov 25 15:36:58 crc kubenswrapper[4796]: I1125 15:36:58.642129 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb_655b2cd8-b6a5-4ab4-848d-908496b6bcc8/pull/0.log" Nov 25 15:36:58 crc kubenswrapper[4796]: I1125 15:36:58.848227 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb_655b2cd8-b6a5-4ab4-848d-908496b6bcc8/util/0.log" Nov 25 15:36:58 crc kubenswrapper[4796]: I1125 15:36:58.861014 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb_655b2cd8-b6a5-4ab4-848d-908496b6bcc8/pull/0.log" Nov 25 15:36:58 crc kubenswrapper[4796]: I1125 15:36:58.902943 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lk4fb_655b2cd8-b6a5-4ab4-848d-908496b6bcc8/extract/0.log" Nov 25 15:36:59 crc kubenswrapper[4796]: I1125 15:36:59.029682 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hfxxz_f1695f85-c20b-4708-b4f0-006f3a269301/marketplace-operator/0.log" Nov 25 15:36:59 crc kubenswrapper[4796]: I1125 15:36:59.262243 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jbwpd_9b81a274-2b8a-4f1b-8890-ffa61ef91055/extract-utilities/0.log" Nov 25 15:36:59 crc kubenswrapper[4796]: 
I1125 15:36:59.438804 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jbwpd_9b81a274-2b8a-4f1b-8890-ffa61ef91055/extract-content/0.log" Nov 25 15:36:59 crc kubenswrapper[4796]: I1125 15:36:59.455013 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jbwpd_9b81a274-2b8a-4f1b-8890-ffa61ef91055/extract-utilities/0.log" Nov 25 15:36:59 crc kubenswrapper[4796]: I1125 15:36:59.494035 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jbwpd_9b81a274-2b8a-4f1b-8890-ffa61ef91055/extract-content/0.log" Nov 25 15:36:59 crc kubenswrapper[4796]: I1125 15:36:59.643322 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jbwpd_9b81a274-2b8a-4f1b-8890-ffa61ef91055/extract-utilities/0.log" Nov 25 15:36:59 crc kubenswrapper[4796]: I1125 15:36:59.672664 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jbwpd_9b81a274-2b8a-4f1b-8890-ffa61ef91055/extract-content/0.log" Nov 25 15:36:59 crc kubenswrapper[4796]: I1125 15:36:59.755550 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f5xps_5b44682b-4eeb-434a-a769-94289e240d6e/extract-utilities/0.log" Nov 25 15:36:59 crc kubenswrapper[4796]: I1125 15:36:59.866261 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jbwpd_9b81a274-2b8a-4f1b-8890-ffa61ef91055/registry-server/0.log" Nov 25 15:36:59 crc kubenswrapper[4796]: I1125 15:36:59.939138 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f5xps_5b44682b-4eeb-434a-a769-94289e240d6e/extract-content/0.log" Nov 25 15:36:59 crc kubenswrapper[4796]: I1125 15:36:59.976405 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-f5xps_5b44682b-4eeb-434a-a769-94289e240d6e/extract-utilities/0.log" Nov 25 15:37:00 crc kubenswrapper[4796]: I1125 15:37:00.025171 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f5xps_5b44682b-4eeb-434a-a769-94289e240d6e/extract-content/0.log" Nov 25 15:37:00 crc kubenswrapper[4796]: I1125 15:37:00.290472 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f5xps_5b44682b-4eeb-434a-a769-94289e240d6e/extract-utilities/0.log" Nov 25 15:37:00 crc kubenswrapper[4796]: I1125 15:37:00.301866 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f5xps_5b44682b-4eeb-434a-a769-94289e240d6e/extract-content/0.log" Nov 25 15:37:00 crc kubenswrapper[4796]: I1125 15:37:00.473354 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f5xps_5b44682b-4eeb-434a-a769-94289e240d6e/registry-server/0.log" Nov 25 15:37:01 crc kubenswrapper[4796]: I1125 15:37:01.409052 4796 scope.go:117] "RemoveContainer" containerID="e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927" Nov 25 15:37:01 crc kubenswrapper[4796]: E1125 15:37:01.409327 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:37:16 crc kubenswrapper[4796]: I1125 15:37:16.410104 4796 scope.go:117] "RemoveContainer" containerID="e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927" Nov 25 15:37:16 crc kubenswrapper[4796]: E1125 15:37:16.411117 4796 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:37:21 crc kubenswrapper[4796]: I1125 15:37:21.456561 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4dwxt"] Nov 25 15:37:21 crc kubenswrapper[4796]: E1125 15:37:21.457733 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3d50e0-dfd8-4128-a799-61a9dc8600e2" containerName="container-00" Nov 25 15:37:21 crc kubenswrapper[4796]: I1125 15:37:21.457751 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3d50e0-dfd8-4128-a799-61a9dc8600e2" containerName="container-00" Nov 25 15:37:21 crc kubenswrapper[4796]: I1125 15:37:21.457987 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3d50e0-dfd8-4128-a799-61a9dc8600e2" containerName="container-00" Nov 25 15:37:21 crc kubenswrapper[4796]: I1125 15:37:21.459795 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4dwxt" Nov 25 15:37:21 crc kubenswrapper[4796]: I1125 15:37:21.464827 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4dwxt"] Nov 25 15:37:21 crc kubenswrapper[4796]: I1125 15:37:21.505885 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqgkd\" (UniqueName: \"kubernetes.io/projected/fc8de44e-7336-4f18-9d90-c5a5118b7cf7-kube-api-access-rqgkd\") pod \"community-operators-4dwxt\" (UID: \"fc8de44e-7336-4f18-9d90-c5a5118b7cf7\") " pod="openshift-marketplace/community-operators-4dwxt" Nov 25 15:37:21 crc kubenswrapper[4796]: I1125 15:37:21.505979 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc8de44e-7336-4f18-9d90-c5a5118b7cf7-utilities\") pod \"community-operators-4dwxt\" (UID: \"fc8de44e-7336-4f18-9d90-c5a5118b7cf7\") " pod="openshift-marketplace/community-operators-4dwxt" Nov 25 15:37:21 crc kubenswrapper[4796]: I1125 15:37:21.506051 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc8de44e-7336-4f18-9d90-c5a5118b7cf7-catalog-content\") pod \"community-operators-4dwxt\" (UID: \"fc8de44e-7336-4f18-9d90-c5a5118b7cf7\") " pod="openshift-marketplace/community-operators-4dwxt" Nov 25 15:37:21 crc kubenswrapper[4796]: I1125 15:37:21.607411 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc8de44e-7336-4f18-9d90-c5a5118b7cf7-utilities\") pod \"community-operators-4dwxt\" (UID: \"fc8de44e-7336-4f18-9d90-c5a5118b7cf7\") " pod="openshift-marketplace/community-operators-4dwxt" Nov 25 15:37:21 crc kubenswrapper[4796]: I1125 15:37:21.607530 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc8de44e-7336-4f18-9d90-c5a5118b7cf7-catalog-content\") pod \"community-operators-4dwxt\" (UID: \"fc8de44e-7336-4f18-9d90-c5a5118b7cf7\") " pod="openshift-marketplace/community-operators-4dwxt" Nov 25 15:37:21 crc kubenswrapper[4796]: I1125 15:37:21.607617 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqgkd\" (UniqueName: \"kubernetes.io/projected/fc8de44e-7336-4f18-9d90-c5a5118b7cf7-kube-api-access-rqgkd\") pod \"community-operators-4dwxt\" (UID: \"fc8de44e-7336-4f18-9d90-c5a5118b7cf7\") " pod="openshift-marketplace/community-operators-4dwxt" Nov 25 15:37:21 crc kubenswrapper[4796]: I1125 15:37:21.607867 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc8de44e-7336-4f18-9d90-c5a5118b7cf7-utilities\") pod \"community-operators-4dwxt\" (UID: \"fc8de44e-7336-4f18-9d90-c5a5118b7cf7\") " pod="openshift-marketplace/community-operators-4dwxt" Nov 25 15:37:21 crc kubenswrapper[4796]: I1125 15:37:21.608084 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc8de44e-7336-4f18-9d90-c5a5118b7cf7-catalog-content\") pod \"community-operators-4dwxt\" (UID: \"fc8de44e-7336-4f18-9d90-c5a5118b7cf7\") " pod="openshift-marketplace/community-operators-4dwxt" Nov 25 15:37:21 crc kubenswrapper[4796]: I1125 15:37:21.638790 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqgkd\" (UniqueName: \"kubernetes.io/projected/fc8de44e-7336-4f18-9d90-c5a5118b7cf7-kube-api-access-rqgkd\") pod \"community-operators-4dwxt\" (UID: \"fc8de44e-7336-4f18-9d90-c5a5118b7cf7\") " pod="openshift-marketplace/community-operators-4dwxt" Nov 25 15:37:21 crc kubenswrapper[4796]: I1125 15:37:21.813512 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4dwxt" Nov 25 15:37:22 crc kubenswrapper[4796]: I1125 15:37:22.430167 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4dwxt"] Nov 25 15:37:23 crc kubenswrapper[4796]: I1125 15:37:23.386177 4796 generic.go:334] "Generic (PLEG): container finished" podID="fc8de44e-7336-4f18-9d90-c5a5118b7cf7" containerID="2915e0819680837a2104a476ddfacd584ad16812cb4e0590f7b233f3e0ad5905" exitCode=0 Nov 25 15:37:23 crc kubenswrapper[4796]: I1125 15:37:23.386345 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4dwxt" event={"ID":"fc8de44e-7336-4f18-9d90-c5a5118b7cf7","Type":"ContainerDied","Data":"2915e0819680837a2104a476ddfacd584ad16812cb4e0590f7b233f3e0ad5905"} Nov 25 15:37:23 crc kubenswrapper[4796]: I1125 15:37:23.386531 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4dwxt" event={"ID":"fc8de44e-7336-4f18-9d90-c5a5118b7cf7","Type":"ContainerStarted","Data":"85b44d00cbaa4b4c9c86f076ffaa3f02940aff49a59edab3f5195ce3dfc53285"} Nov 25 15:37:23 crc kubenswrapper[4796]: I1125 15:37:23.388933 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 15:37:24 crc kubenswrapper[4796]: I1125 15:37:24.406032 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4dwxt" event={"ID":"fc8de44e-7336-4f18-9d90-c5a5118b7cf7","Type":"ContainerStarted","Data":"faefafe3de08b38ef23e414d72680d6b2fbdd3f022decdbe480f58afa519352b"} Nov 25 15:37:26 crc kubenswrapper[4796]: I1125 15:37:26.441166 4796 generic.go:334] "Generic (PLEG): container finished" podID="fc8de44e-7336-4f18-9d90-c5a5118b7cf7" containerID="faefafe3de08b38ef23e414d72680d6b2fbdd3f022decdbe480f58afa519352b" exitCode=0 Nov 25 15:37:26 crc kubenswrapper[4796]: I1125 15:37:26.441335 4796 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-4dwxt" event={"ID":"fc8de44e-7336-4f18-9d90-c5a5118b7cf7","Type":"ContainerDied","Data":"faefafe3de08b38ef23e414d72680d6b2fbdd3f022decdbe480f58afa519352b"} Nov 25 15:37:27 crc kubenswrapper[4796]: I1125 15:37:27.453931 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4dwxt" event={"ID":"fc8de44e-7336-4f18-9d90-c5a5118b7cf7","Type":"ContainerStarted","Data":"7cfca68527d126d27f3ed2081d474378a971cbfb67ee23c9307ff41d287542f0"} Nov 25 15:37:27 crc kubenswrapper[4796]: I1125 15:37:27.473984 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4dwxt" podStartSLOduration=2.810278971 podStartE2EDuration="6.473964361s" podCreationTimestamp="2025-11-25 15:37:21 +0000 UTC" firstStartedPulling="2025-11-25 15:37:23.388641016 +0000 UTC m=+4371.731750440" lastFinishedPulling="2025-11-25 15:37:27.052326406 +0000 UTC m=+4375.395435830" observedRunningTime="2025-11-25 15:37:27.470177992 +0000 UTC m=+4375.813287416" watchObservedRunningTime="2025-11-25 15:37:27.473964361 +0000 UTC m=+4375.817073785" Nov 25 15:37:30 crc kubenswrapper[4796]: I1125 15:37:30.413633 4796 scope.go:117] "RemoveContainer" containerID="e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927" Nov 25 15:37:30 crc kubenswrapper[4796]: E1125 15:37:30.414537 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:37:30 crc kubenswrapper[4796]: E1125 15:37:30.975011 4796 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 
38.102.83.227:49606->38.102.83.227:35215: write tcp 38.102.83.227:49606->38.102.83.227:35215: write: broken pipe Nov 25 15:37:31 crc kubenswrapper[4796]: I1125 15:37:31.814458 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4dwxt" Nov 25 15:37:31 crc kubenswrapper[4796]: I1125 15:37:31.814525 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4dwxt" Nov 25 15:37:31 crc kubenswrapper[4796]: I1125 15:37:31.871171 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4dwxt" Nov 25 15:37:32 crc kubenswrapper[4796]: I1125 15:37:32.561557 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4dwxt" Nov 25 15:37:32 crc kubenswrapper[4796]: I1125 15:37:32.623821 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4dwxt"] Nov 25 15:37:34 crc kubenswrapper[4796]: I1125 15:37:34.511482 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4dwxt" podUID="fc8de44e-7336-4f18-9d90-c5a5118b7cf7" containerName="registry-server" containerID="cri-o://7cfca68527d126d27f3ed2081d474378a971cbfb67ee23c9307ff41d287542f0" gracePeriod=2 Nov 25 15:37:35 crc kubenswrapper[4796]: I1125 15:37:35.046065 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4dwxt" Nov 25 15:37:35 crc kubenswrapper[4796]: I1125 15:37:35.056528 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqgkd\" (UniqueName: \"kubernetes.io/projected/fc8de44e-7336-4f18-9d90-c5a5118b7cf7-kube-api-access-rqgkd\") pod \"fc8de44e-7336-4f18-9d90-c5a5118b7cf7\" (UID: \"fc8de44e-7336-4f18-9d90-c5a5118b7cf7\") " Nov 25 15:37:35 crc kubenswrapper[4796]: I1125 15:37:35.056712 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc8de44e-7336-4f18-9d90-c5a5118b7cf7-utilities\") pod \"fc8de44e-7336-4f18-9d90-c5a5118b7cf7\" (UID: \"fc8de44e-7336-4f18-9d90-c5a5118b7cf7\") " Nov 25 15:37:35 crc kubenswrapper[4796]: I1125 15:37:35.056825 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc8de44e-7336-4f18-9d90-c5a5118b7cf7-catalog-content\") pod \"fc8de44e-7336-4f18-9d90-c5a5118b7cf7\" (UID: \"fc8de44e-7336-4f18-9d90-c5a5118b7cf7\") " Nov 25 15:37:35 crc kubenswrapper[4796]: I1125 15:37:35.057800 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc8de44e-7336-4f18-9d90-c5a5118b7cf7-utilities" (OuterVolumeSpecName: "utilities") pod "fc8de44e-7336-4f18-9d90-c5a5118b7cf7" (UID: "fc8de44e-7336-4f18-9d90-c5a5118b7cf7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:37:35 crc kubenswrapper[4796]: I1125 15:37:35.061857 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8de44e-7336-4f18-9d90-c5a5118b7cf7-kube-api-access-rqgkd" (OuterVolumeSpecName: "kube-api-access-rqgkd") pod "fc8de44e-7336-4f18-9d90-c5a5118b7cf7" (UID: "fc8de44e-7336-4f18-9d90-c5a5118b7cf7"). InnerVolumeSpecName "kube-api-access-rqgkd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:37:35 crc kubenswrapper[4796]: I1125 15:37:35.128296 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc8de44e-7336-4f18-9d90-c5a5118b7cf7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc8de44e-7336-4f18-9d90-c5a5118b7cf7" (UID: "fc8de44e-7336-4f18-9d90-c5a5118b7cf7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:37:35 crc kubenswrapper[4796]: I1125 15:37:35.159136 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqgkd\" (UniqueName: \"kubernetes.io/projected/fc8de44e-7336-4f18-9d90-c5a5118b7cf7-kube-api-access-rqgkd\") on node \"crc\" DevicePath \"\"" Nov 25 15:37:35 crc kubenswrapper[4796]: I1125 15:37:35.159221 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc8de44e-7336-4f18-9d90-c5a5118b7cf7-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:37:35 crc kubenswrapper[4796]: I1125 15:37:35.159282 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc8de44e-7336-4f18-9d90-c5a5118b7cf7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:37:35 crc kubenswrapper[4796]: I1125 15:37:35.524915 4796 generic.go:334] "Generic (PLEG): container finished" podID="fc8de44e-7336-4f18-9d90-c5a5118b7cf7" containerID="7cfca68527d126d27f3ed2081d474378a971cbfb67ee23c9307ff41d287542f0" exitCode=0 Nov 25 15:37:35 crc kubenswrapper[4796]: I1125 15:37:35.525038 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4dwxt" event={"ID":"fc8de44e-7336-4f18-9d90-c5a5118b7cf7","Type":"ContainerDied","Data":"7cfca68527d126d27f3ed2081d474378a971cbfb67ee23c9307ff41d287542f0"} Nov 25 15:37:35 crc kubenswrapper[4796]: I1125 15:37:35.525348 4796 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-4dwxt" event={"ID":"fc8de44e-7336-4f18-9d90-c5a5118b7cf7","Type":"ContainerDied","Data":"85b44d00cbaa4b4c9c86f076ffaa3f02940aff49a59edab3f5195ce3dfc53285"} Nov 25 15:37:35 crc kubenswrapper[4796]: I1125 15:37:35.525373 4796 scope.go:117] "RemoveContainer" containerID="7cfca68527d126d27f3ed2081d474378a971cbfb67ee23c9307ff41d287542f0" Nov 25 15:37:35 crc kubenswrapper[4796]: I1125 15:37:35.525086 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4dwxt" Nov 25 15:37:35 crc kubenswrapper[4796]: I1125 15:37:35.570109 4796 scope.go:117] "RemoveContainer" containerID="faefafe3de08b38ef23e414d72680d6b2fbdd3f022decdbe480f58afa519352b" Nov 25 15:37:35 crc kubenswrapper[4796]: I1125 15:37:35.570503 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4dwxt"] Nov 25 15:37:35 crc kubenswrapper[4796]: I1125 15:37:35.586754 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4dwxt"] Nov 25 15:37:35 crc kubenswrapper[4796]: I1125 15:37:35.599650 4796 scope.go:117] "RemoveContainer" containerID="2915e0819680837a2104a476ddfacd584ad16812cb4e0590f7b233f3e0ad5905" Nov 25 15:37:35 crc kubenswrapper[4796]: I1125 15:37:35.638149 4796 scope.go:117] "RemoveContainer" containerID="7cfca68527d126d27f3ed2081d474378a971cbfb67ee23c9307ff41d287542f0" Nov 25 15:37:35 crc kubenswrapper[4796]: E1125 15:37:35.638563 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cfca68527d126d27f3ed2081d474378a971cbfb67ee23c9307ff41d287542f0\": container with ID starting with 7cfca68527d126d27f3ed2081d474378a971cbfb67ee23c9307ff41d287542f0 not found: ID does not exist" containerID="7cfca68527d126d27f3ed2081d474378a971cbfb67ee23c9307ff41d287542f0" Nov 25 15:37:35 crc kubenswrapper[4796]: I1125 
15:37:35.638611 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cfca68527d126d27f3ed2081d474378a971cbfb67ee23c9307ff41d287542f0"} err="failed to get container status \"7cfca68527d126d27f3ed2081d474378a971cbfb67ee23c9307ff41d287542f0\": rpc error: code = NotFound desc = could not find container \"7cfca68527d126d27f3ed2081d474378a971cbfb67ee23c9307ff41d287542f0\": container with ID starting with 7cfca68527d126d27f3ed2081d474378a971cbfb67ee23c9307ff41d287542f0 not found: ID does not exist" Nov 25 15:37:35 crc kubenswrapper[4796]: I1125 15:37:35.638630 4796 scope.go:117] "RemoveContainer" containerID="faefafe3de08b38ef23e414d72680d6b2fbdd3f022decdbe480f58afa519352b" Nov 25 15:37:35 crc kubenswrapper[4796]: E1125 15:37:35.639061 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faefafe3de08b38ef23e414d72680d6b2fbdd3f022decdbe480f58afa519352b\": container with ID starting with faefafe3de08b38ef23e414d72680d6b2fbdd3f022decdbe480f58afa519352b not found: ID does not exist" containerID="faefafe3de08b38ef23e414d72680d6b2fbdd3f022decdbe480f58afa519352b" Nov 25 15:37:35 crc kubenswrapper[4796]: I1125 15:37:35.639083 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faefafe3de08b38ef23e414d72680d6b2fbdd3f022decdbe480f58afa519352b"} err="failed to get container status \"faefafe3de08b38ef23e414d72680d6b2fbdd3f022decdbe480f58afa519352b\": rpc error: code = NotFound desc = could not find container \"faefafe3de08b38ef23e414d72680d6b2fbdd3f022decdbe480f58afa519352b\": container with ID starting with faefafe3de08b38ef23e414d72680d6b2fbdd3f022decdbe480f58afa519352b not found: ID does not exist" Nov 25 15:37:35 crc kubenswrapper[4796]: I1125 15:37:35.639097 4796 scope.go:117] "RemoveContainer" containerID="2915e0819680837a2104a476ddfacd584ad16812cb4e0590f7b233f3e0ad5905" Nov 25 15:37:35 crc 
kubenswrapper[4796]: E1125 15:37:35.639400 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2915e0819680837a2104a476ddfacd584ad16812cb4e0590f7b233f3e0ad5905\": container with ID starting with 2915e0819680837a2104a476ddfacd584ad16812cb4e0590f7b233f3e0ad5905 not found: ID does not exist" containerID="2915e0819680837a2104a476ddfacd584ad16812cb4e0590f7b233f3e0ad5905" Nov 25 15:37:35 crc kubenswrapper[4796]: I1125 15:37:35.639415 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2915e0819680837a2104a476ddfacd584ad16812cb4e0590f7b233f3e0ad5905"} err="failed to get container status \"2915e0819680837a2104a476ddfacd584ad16812cb4e0590f7b233f3e0ad5905\": rpc error: code = NotFound desc = could not find container \"2915e0819680837a2104a476ddfacd584ad16812cb4e0590f7b233f3e0ad5905\": container with ID starting with 2915e0819680837a2104a476ddfacd584ad16812cb4e0590f7b233f3e0ad5905 not found: ID does not exist" Nov 25 15:37:36 crc kubenswrapper[4796]: I1125 15:37:36.428217 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8de44e-7336-4f18-9d90-c5a5118b7cf7" path="/var/lib/kubelet/pods/fc8de44e-7336-4f18-9d90-c5a5118b7cf7/volumes" Nov 25 15:37:45 crc kubenswrapper[4796]: I1125 15:37:45.410326 4796 scope.go:117] "RemoveContainer" containerID="e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927" Nov 25 15:37:45 crc kubenswrapper[4796]: E1125 15:37:45.411235 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:37:57 crc 
kubenswrapper[4796]: I1125 15:37:57.409814 4796 scope.go:117] "RemoveContainer" containerID="e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927" Nov 25 15:37:57 crc kubenswrapper[4796]: E1125 15:37:57.410722 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:38:12 crc kubenswrapper[4796]: I1125 15:38:12.422017 4796 scope.go:117] "RemoveContainer" containerID="e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927" Nov 25 15:38:12 crc kubenswrapper[4796]: E1125 15:38:12.424295 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:38:24 crc kubenswrapper[4796]: I1125 15:38:24.410068 4796 scope.go:117] "RemoveContainer" containerID="e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927" Nov 25 15:38:24 crc kubenswrapper[4796]: E1125 15:38:24.410807 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 
25 15:38:35 crc kubenswrapper[4796]: I1125 15:38:35.409306 4796 scope.go:117] "RemoveContainer" containerID="e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927" Nov 25 15:38:35 crc kubenswrapper[4796]: E1125 15:38:35.410523 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:38:42 crc kubenswrapper[4796]: I1125 15:38:42.205194 4796 generic.go:334] "Generic (PLEG): container finished" podID="d4d5bc49-b766-4225-be3b-b2cc78c22ae9" containerID="690344763ce68ff6d3fff3f98e53903915a239f2ca89b7a75d2a2f2545c8b483" exitCode=0 Nov 25 15:38:42 crc kubenswrapper[4796]: I1125 15:38:42.205665 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-j2hr4/must-gather-j88d4" event={"ID":"d4d5bc49-b766-4225-be3b-b2cc78c22ae9","Type":"ContainerDied","Data":"690344763ce68ff6d3fff3f98e53903915a239f2ca89b7a75d2a2f2545c8b483"} Nov 25 15:38:42 crc kubenswrapper[4796]: I1125 15:38:42.206542 4796 scope.go:117] "RemoveContainer" containerID="690344763ce68ff6d3fff3f98e53903915a239f2ca89b7a75d2a2f2545c8b483" Nov 25 15:38:42 crc kubenswrapper[4796]: I1125 15:38:42.278884 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-j2hr4_must-gather-j88d4_d4d5bc49-b766-4225-be3b-b2cc78c22ae9/gather/0.log" Nov 25 15:38:50 crc kubenswrapper[4796]: I1125 15:38:50.409424 4796 scope.go:117] "RemoveContainer" containerID="e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927" Nov 25 15:38:50 crc kubenswrapper[4796]: E1125 15:38:50.410314 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:38:52 crc kubenswrapper[4796]: I1125 15:38:52.174985 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-j2hr4/must-gather-j88d4"] Nov 25 15:38:52 crc kubenswrapper[4796]: I1125 15:38:52.176187 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-j2hr4/must-gather-j88d4" podUID="d4d5bc49-b766-4225-be3b-b2cc78c22ae9" containerName="copy" containerID="cri-o://0d1c5b69a9120ff5c5caabd0c03916eb46abe06bd2b6f328ca1313704a61f118" gracePeriod=2 Nov 25 15:38:52 crc kubenswrapper[4796]: I1125 15:38:52.185930 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-j2hr4/must-gather-j88d4"] Nov 25 15:38:52 crc kubenswrapper[4796]: I1125 15:38:52.647483 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-j2hr4_must-gather-j88d4_d4d5bc49-b766-4225-be3b-b2cc78c22ae9/copy/0.log" Nov 25 15:38:52 crc kubenswrapper[4796]: I1125 15:38:52.648216 4796 generic.go:334] "Generic (PLEG): container finished" podID="d4d5bc49-b766-4225-be3b-b2cc78c22ae9" containerID="0d1c5b69a9120ff5c5caabd0c03916eb46abe06bd2b6f328ca1313704a61f118" exitCode=143 Nov 25 15:38:52 crc kubenswrapper[4796]: I1125 15:38:52.648324 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2059e46a6fe6148a9615368363a928dd9db6ef95806d13c195c0e908042e6e8" Nov 25 15:38:52 crc kubenswrapper[4796]: I1125 15:38:52.702523 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-j2hr4_must-gather-j88d4_d4d5bc49-b766-4225-be3b-b2cc78c22ae9/copy/0.log" Nov 25 15:38:52 crc kubenswrapper[4796]: 
I1125 15:38:52.703185 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j2hr4/must-gather-j88d4" Nov 25 15:38:52 crc kubenswrapper[4796]: I1125 15:38:52.750898 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d4d5bc49-b766-4225-be3b-b2cc78c22ae9-must-gather-output\") pod \"d4d5bc49-b766-4225-be3b-b2cc78c22ae9\" (UID: \"d4d5bc49-b766-4225-be3b-b2cc78c22ae9\") " Nov 25 15:38:52 crc kubenswrapper[4796]: I1125 15:38:52.751354 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp86g\" (UniqueName: \"kubernetes.io/projected/d4d5bc49-b766-4225-be3b-b2cc78c22ae9-kube-api-access-cp86g\") pod \"d4d5bc49-b766-4225-be3b-b2cc78c22ae9\" (UID: \"d4d5bc49-b766-4225-be3b-b2cc78c22ae9\") " Nov 25 15:38:52 crc kubenswrapper[4796]: I1125 15:38:52.756914 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4d5bc49-b766-4225-be3b-b2cc78c22ae9-kube-api-access-cp86g" (OuterVolumeSpecName: "kube-api-access-cp86g") pod "d4d5bc49-b766-4225-be3b-b2cc78c22ae9" (UID: "d4d5bc49-b766-4225-be3b-b2cc78c22ae9"). InnerVolumeSpecName "kube-api-access-cp86g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:38:52 crc kubenswrapper[4796]: I1125 15:38:52.854814 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp86g\" (UniqueName: \"kubernetes.io/projected/d4d5bc49-b766-4225-be3b-b2cc78c22ae9-kube-api-access-cp86g\") on node \"crc\" DevicePath \"\"" Nov 25 15:38:52 crc kubenswrapper[4796]: I1125 15:38:52.898439 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4d5bc49-b766-4225-be3b-b2cc78c22ae9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "d4d5bc49-b766-4225-be3b-b2cc78c22ae9" (UID: "d4d5bc49-b766-4225-be3b-b2cc78c22ae9"). 
InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:38:52 crc kubenswrapper[4796]: I1125 15:38:52.956840 4796 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d4d5bc49-b766-4225-be3b-b2cc78c22ae9-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 25 15:38:53 crc kubenswrapper[4796]: I1125 15:38:53.656903 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-j2hr4/must-gather-j88d4" Nov 25 15:38:54 crc kubenswrapper[4796]: I1125 15:38:54.421186 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4d5bc49-b766-4225-be3b-b2cc78c22ae9" path="/var/lib/kubelet/pods/d4d5bc49-b766-4225-be3b-b2cc78c22ae9/volumes" Nov 25 15:39:01 crc kubenswrapper[4796]: I1125 15:39:01.409223 4796 scope.go:117] "RemoveContainer" containerID="e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927" Nov 25 15:39:01 crc kubenswrapper[4796]: E1125 15:39:01.410087 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:39:16 crc kubenswrapper[4796]: I1125 15:39:16.409968 4796 scope.go:117] "RemoveContainer" containerID="e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927" Nov 25 15:39:16 crc kubenswrapper[4796]: E1125 15:39:16.410748 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:39:31 crc kubenswrapper[4796]: I1125 15:39:31.409218 4796 scope.go:117] "RemoveContainer" containerID="e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927" Nov 25 15:39:31 crc kubenswrapper[4796]: E1125 15:39:31.410260 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:39:44 crc kubenswrapper[4796]: I1125 15:39:44.409390 4796 scope.go:117] "RemoveContainer" containerID="e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927" Nov 25 15:39:44 crc kubenswrapper[4796]: E1125 15:39:44.410082 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:39:49 crc kubenswrapper[4796]: I1125 15:39:49.858968 4796 scope.go:117] "RemoveContainer" containerID="690344763ce68ff6d3fff3f98e53903915a239f2ca89b7a75d2a2f2545c8b483" Nov 25 15:39:49 crc kubenswrapper[4796]: I1125 15:39:49.964328 4796 scope.go:117] "RemoveContainer" containerID="0d1c5b69a9120ff5c5caabd0c03916eb46abe06bd2b6f328ca1313704a61f118" Nov 25 15:39:55 crc kubenswrapper[4796]: I1125 15:39:55.410605 4796 
scope.go:117] "RemoveContainer" containerID="e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927" Nov 25 15:39:55 crc kubenswrapper[4796]: E1125 15:39:55.411547 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:40:05 crc kubenswrapper[4796]: I1125 15:40:05.036723 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9ldmr"] Nov 25 15:40:05 crc kubenswrapper[4796]: E1125 15:40:05.038069 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc8de44e-7336-4f18-9d90-c5a5118b7cf7" containerName="registry-server" Nov 25 15:40:05 crc kubenswrapper[4796]: I1125 15:40:05.038094 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc8de44e-7336-4f18-9d90-c5a5118b7cf7" containerName="registry-server" Nov 25 15:40:05 crc kubenswrapper[4796]: E1125 15:40:05.038131 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc8de44e-7336-4f18-9d90-c5a5118b7cf7" containerName="extract-utilities" Nov 25 15:40:05 crc kubenswrapper[4796]: I1125 15:40:05.038147 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc8de44e-7336-4f18-9d90-c5a5118b7cf7" containerName="extract-utilities" Nov 25 15:40:05 crc kubenswrapper[4796]: E1125 15:40:05.038181 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d5bc49-b766-4225-be3b-b2cc78c22ae9" containerName="gather" Nov 25 15:40:05 crc kubenswrapper[4796]: I1125 15:40:05.038195 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d5bc49-b766-4225-be3b-b2cc78c22ae9" containerName="gather" Nov 25 15:40:05 crc kubenswrapper[4796]: E1125 
15:40:05.038229 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc8de44e-7336-4f18-9d90-c5a5118b7cf7" containerName="extract-content" Nov 25 15:40:05 crc kubenswrapper[4796]: I1125 15:40:05.038241 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc8de44e-7336-4f18-9d90-c5a5118b7cf7" containerName="extract-content" Nov 25 15:40:05 crc kubenswrapper[4796]: E1125 15:40:05.038277 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d5bc49-b766-4225-be3b-b2cc78c22ae9" containerName="copy" Nov 25 15:40:05 crc kubenswrapper[4796]: I1125 15:40:05.038289 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d5bc49-b766-4225-be3b-b2cc78c22ae9" containerName="copy" Nov 25 15:40:05 crc kubenswrapper[4796]: I1125 15:40:05.038688 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc8de44e-7336-4f18-9d90-c5a5118b7cf7" containerName="registry-server" Nov 25 15:40:05 crc kubenswrapper[4796]: I1125 15:40:05.038720 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4d5bc49-b766-4225-be3b-b2cc78c22ae9" containerName="gather" Nov 25 15:40:05 crc kubenswrapper[4796]: I1125 15:40:05.038747 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4d5bc49-b766-4225-be3b-b2cc78c22ae9" containerName="copy" Nov 25 15:40:05 crc kubenswrapper[4796]: I1125 15:40:05.044020 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9ldmr" Nov 25 15:40:05 crc kubenswrapper[4796]: I1125 15:40:05.049180 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9ldmr"] Nov 25 15:40:05 crc kubenswrapper[4796]: I1125 15:40:05.199090 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4111079e-cce1-4697-8608-067b6ea68dde-catalog-content\") pod \"redhat-operators-9ldmr\" (UID: \"4111079e-cce1-4697-8608-067b6ea68dde\") " pod="openshift-marketplace/redhat-operators-9ldmr" Nov 25 15:40:05 crc kubenswrapper[4796]: I1125 15:40:05.199439 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4111079e-cce1-4697-8608-067b6ea68dde-utilities\") pod \"redhat-operators-9ldmr\" (UID: \"4111079e-cce1-4697-8608-067b6ea68dde\") " pod="openshift-marketplace/redhat-operators-9ldmr" Nov 25 15:40:05 crc kubenswrapper[4796]: I1125 15:40:05.199503 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7swht\" (UniqueName: \"kubernetes.io/projected/4111079e-cce1-4697-8608-067b6ea68dde-kube-api-access-7swht\") pod \"redhat-operators-9ldmr\" (UID: \"4111079e-cce1-4697-8608-067b6ea68dde\") " pod="openshift-marketplace/redhat-operators-9ldmr" Nov 25 15:40:05 crc kubenswrapper[4796]: I1125 15:40:05.301352 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7swht\" (UniqueName: \"kubernetes.io/projected/4111079e-cce1-4697-8608-067b6ea68dde-kube-api-access-7swht\") pod \"redhat-operators-9ldmr\" (UID: \"4111079e-cce1-4697-8608-067b6ea68dde\") " pod="openshift-marketplace/redhat-operators-9ldmr" Nov 25 15:40:05 crc kubenswrapper[4796]: I1125 15:40:05.301441 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4111079e-cce1-4697-8608-067b6ea68dde-catalog-content\") pod \"redhat-operators-9ldmr\" (UID: \"4111079e-cce1-4697-8608-067b6ea68dde\") " pod="openshift-marketplace/redhat-operators-9ldmr" Nov 25 15:40:05 crc kubenswrapper[4796]: I1125 15:40:05.301552 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4111079e-cce1-4697-8608-067b6ea68dde-utilities\") pod \"redhat-operators-9ldmr\" (UID: \"4111079e-cce1-4697-8608-067b6ea68dde\") " pod="openshift-marketplace/redhat-operators-9ldmr" Nov 25 15:40:05 crc kubenswrapper[4796]: I1125 15:40:05.302055 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4111079e-cce1-4697-8608-067b6ea68dde-utilities\") pod \"redhat-operators-9ldmr\" (UID: \"4111079e-cce1-4697-8608-067b6ea68dde\") " pod="openshift-marketplace/redhat-operators-9ldmr" Nov 25 15:40:05 crc kubenswrapper[4796]: I1125 15:40:05.302367 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4111079e-cce1-4697-8608-067b6ea68dde-catalog-content\") pod \"redhat-operators-9ldmr\" (UID: \"4111079e-cce1-4697-8608-067b6ea68dde\") " pod="openshift-marketplace/redhat-operators-9ldmr" Nov 25 15:40:05 crc kubenswrapper[4796]: I1125 15:40:05.349020 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7swht\" (UniqueName: \"kubernetes.io/projected/4111079e-cce1-4697-8608-067b6ea68dde-kube-api-access-7swht\") pod \"redhat-operators-9ldmr\" (UID: \"4111079e-cce1-4697-8608-067b6ea68dde\") " pod="openshift-marketplace/redhat-operators-9ldmr" Nov 25 15:40:05 crc kubenswrapper[4796]: I1125 15:40:05.406903 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9ldmr" Nov 25 15:40:05 crc kubenswrapper[4796]: I1125 15:40:05.921769 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9ldmr"] Nov 25 15:40:06 crc kubenswrapper[4796]: I1125 15:40:06.408109 4796 generic.go:334] "Generic (PLEG): container finished" podID="4111079e-cce1-4697-8608-067b6ea68dde" containerID="0d6d08a9e4f3c67aa3ef12a26c16152d85921b57d42cf5a6bd0ee94a8d424df5" exitCode=0 Nov 25 15:40:06 crc kubenswrapper[4796]: I1125 15:40:06.408162 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ldmr" event={"ID":"4111079e-cce1-4697-8608-067b6ea68dde","Type":"ContainerDied","Data":"0d6d08a9e4f3c67aa3ef12a26c16152d85921b57d42cf5a6bd0ee94a8d424df5"} Nov 25 15:40:06 crc kubenswrapper[4796]: I1125 15:40:06.408194 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ldmr" event={"ID":"4111079e-cce1-4697-8608-067b6ea68dde","Type":"ContainerStarted","Data":"935a93fc09d5e954f1ba645ac485b8839a7ba0bee1bb3be71aac0414d16efa6f"} Nov 25 15:40:07 crc kubenswrapper[4796]: I1125 15:40:07.410480 4796 scope.go:117] "RemoveContainer" containerID="e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927" Nov 25 15:40:07 crc kubenswrapper[4796]: E1125 15:40:07.411516 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:40:07 crc kubenswrapper[4796]: I1125 15:40:07.421974 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ldmr" 
event={"ID":"4111079e-cce1-4697-8608-067b6ea68dde","Type":"ContainerStarted","Data":"08d359945a4954d0513a05c961a8c91dd4aad1585825ab20326ee4da27fea945"} Nov 25 15:40:10 crc kubenswrapper[4796]: I1125 15:40:10.453788 4796 generic.go:334] "Generic (PLEG): container finished" podID="4111079e-cce1-4697-8608-067b6ea68dde" containerID="08d359945a4954d0513a05c961a8c91dd4aad1585825ab20326ee4da27fea945" exitCode=0 Nov 25 15:40:10 crc kubenswrapper[4796]: I1125 15:40:10.453872 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ldmr" event={"ID":"4111079e-cce1-4697-8608-067b6ea68dde","Type":"ContainerDied","Data":"08d359945a4954d0513a05c961a8c91dd4aad1585825ab20326ee4da27fea945"} Nov 25 15:40:11 crc kubenswrapper[4796]: I1125 15:40:11.466772 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ldmr" event={"ID":"4111079e-cce1-4697-8608-067b6ea68dde","Type":"ContainerStarted","Data":"3e5638a6f979eb4c415101a3db297ca9b5ff762da93fcf7b21be0c50411c5f98"} Nov 25 15:40:11 crc kubenswrapper[4796]: I1125 15:40:11.496358 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9ldmr" podStartSLOduration=2.045758372 podStartE2EDuration="6.496334639s" podCreationTimestamp="2025-11-25 15:40:05 +0000 UTC" firstStartedPulling="2025-11-25 15:40:06.411765306 +0000 UTC m=+4534.754874740" lastFinishedPulling="2025-11-25 15:40:10.862341573 +0000 UTC m=+4539.205451007" observedRunningTime="2025-11-25 15:40:11.48999816 +0000 UTC m=+4539.833107584" watchObservedRunningTime="2025-11-25 15:40:11.496334639 +0000 UTC m=+4539.839444073" Nov 25 15:40:15 crc kubenswrapper[4796]: I1125 15:40:15.407039 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9ldmr" Nov 25 15:40:15 crc kubenswrapper[4796]: I1125 15:40:15.407414 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-9ldmr" Nov 25 15:40:16 crc kubenswrapper[4796]: I1125 15:40:16.461685 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9ldmr" podUID="4111079e-cce1-4697-8608-067b6ea68dde" containerName="registry-server" probeResult="failure" output=< Nov 25 15:40:16 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Nov 25 15:40:16 crc kubenswrapper[4796]: > Nov 25 15:40:22 crc kubenswrapper[4796]: I1125 15:40:22.420842 4796 scope.go:117] "RemoveContainer" containerID="e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927" Nov 25 15:40:22 crc kubenswrapper[4796]: E1125 15:40:22.421894 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8" Nov 25 15:40:25 crc kubenswrapper[4796]: I1125 15:40:25.452961 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9ldmr" Nov 25 15:40:25 crc kubenswrapper[4796]: I1125 15:40:25.503166 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9ldmr" Nov 25 15:40:25 crc kubenswrapper[4796]: I1125 15:40:25.695514 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9ldmr"] Nov 25 15:40:26 crc kubenswrapper[4796]: I1125 15:40:26.632133 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9ldmr" podUID="4111079e-cce1-4697-8608-067b6ea68dde" containerName="registry-server" 
containerID="cri-o://3e5638a6f979eb4c415101a3db297ca9b5ff762da93fcf7b21be0c50411c5f98" gracePeriod=2 Nov 25 15:40:27 crc kubenswrapper[4796]: I1125 15:40:27.130508 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9ldmr" Nov 25 15:40:27 crc kubenswrapper[4796]: I1125 15:40:27.241811 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4111079e-cce1-4697-8608-067b6ea68dde-catalog-content\") pod \"4111079e-cce1-4697-8608-067b6ea68dde\" (UID: \"4111079e-cce1-4697-8608-067b6ea68dde\") " Nov 25 15:40:27 crc kubenswrapper[4796]: I1125 15:40:27.242014 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4111079e-cce1-4697-8608-067b6ea68dde-utilities\") pod \"4111079e-cce1-4697-8608-067b6ea68dde\" (UID: \"4111079e-cce1-4697-8608-067b6ea68dde\") " Nov 25 15:40:27 crc kubenswrapper[4796]: I1125 15:40:27.242155 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7swht\" (UniqueName: \"kubernetes.io/projected/4111079e-cce1-4697-8608-067b6ea68dde-kube-api-access-7swht\") pod \"4111079e-cce1-4697-8608-067b6ea68dde\" (UID: \"4111079e-cce1-4697-8608-067b6ea68dde\") " Nov 25 15:40:27 crc kubenswrapper[4796]: I1125 15:40:27.243870 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4111079e-cce1-4697-8608-067b6ea68dde-utilities" (OuterVolumeSpecName: "utilities") pod "4111079e-cce1-4697-8608-067b6ea68dde" (UID: "4111079e-cce1-4697-8608-067b6ea68dde"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:40:27 crc kubenswrapper[4796]: I1125 15:40:27.249530 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4111079e-cce1-4697-8608-067b6ea68dde-kube-api-access-7swht" (OuterVolumeSpecName: "kube-api-access-7swht") pod "4111079e-cce1-4697-8608-067b6ea68dde" (UID: "4111079e-cce1-4697-8608-067b6ea68dde"). InnerVolumeSpecName "kube-api-access-7swht". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:40:27 crc kubenswrapper[4796]: I1125 15:40:27.344483 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4111079e-cce1-4697-8608-067b6ea68dde-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:40:27 crc kubenswrapper[4796]: I1125 15:40:27.344527 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7swht\" (UniqueName: \"kubernetes.io/projected/4111079e-cce1-4697-8608-067b6ea68dde-kube-api-access-7swht\") on node \"crc\" DevicePath \"\"" Nov 25 15:40:27 crc kubenswrapper[4796]: I1125 15:40:27.357291 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4111079e-cce1-4697-8608-067b6ea68dde-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4111079e-cce1-4697-8608-067b6ea68dde" (UID: "4111079e-cce1-4697-8608-067b6ea68dde"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:40:27 crc kubenswrapper[4796]: I1125 15:40:27.446169 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4111079e-cce1-4697-8608-067b6ea68dde-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:40:27 crc kubenswrapper[4796]: I1125 15:40:27.644485 4796 generic.go:334] "Generic (PLEG): container finished" podID="4111079e-cce1-4697-8608-067b6ea68dde" containerID="3e5638a6f979eb4c415101a3db297ca9b5ff762da93fcf7b21be0c50411c5f98" exitCode=0 Nov 25 15:40:27 crc kubenswrapper[4796]: I1125 15:40:27.644556 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ldmr" event={"ID":"4111079e-cce1-4697-8608-067b6ea68dde","Type":"ContainerDied","Data":"3e5638a6f979eb4c415101a3db297ca9b5ff762da93fcf7b21be0c50411c5f98"} Nov 25 15:40:27 crc kubenswrapper[4796]: I1125 15:40:27.644610 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9ldmr" Nov 25 15:40:27 crc kubenswrapper[4796]: I1125 15:40:27.644645 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ldmr" event={"ID":"4111079e-cce1-4697-8608-067b6ea68dde","Type":"ContainerDied","Data":"935a93fc09d5e954f1ba645ac485b8839a7ba0bee1bb3be71aac0414d16efa6f"} Nov 25 15:40:27 crc kubenswrapper[4796]: I1125 15:40:27.644677 4796 scope.go:117] "RemoveContainer" containerID="3e5638a6f979eb4c415101a3db297ca9b5ff762da93fcf7b21be0c50411c5f98" Nov 25 15:40:27 crc kubenswrapper[4796]: I1125 15:40:27.666873 4796 scope.go:117] "RemoveContainer" containerID="08d359945a4954d0513a05c961a8c91dd4aad1585825ab20326ee4da27fea945" Nov 25 15:40:27 crc kubenswrapper[4796]: I1125 15:40:27.702262 4796 scope.go:117] "RemoveContainer" containerID="0d6d08a9e4f3c67aa3ef12a26c16152d85921b57d42cf5a6bd0ee94a8d424df5" Nov 25 15:40:27 crc kubenswrapper[4796]: I1125 15:40:27.706519 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9ldmr"] Nov 25 15:40:27 crc kubenswrapper[4796]: I1125 15:40:27.714322 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9ldmr"] Nov 25 15:40:27 crc kubenswrapper[4796]: I1125 15:40:27.767278 4796 scope.go:117] "RemoveContainer" containerID="3e5638a6f979eb4c415101a3db297ca9b5ff762da93fcf7b21be0c50411c5f98" Nov 25 15:40:27 crc kubenswrapper[4796]: E1125 15:40:27.768437 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e5638a6f979eb4c415101a3db297ca9b5ff762da93fcf7b21be0c50411c5f98\": container with ID starting with 3e5638a6f979eb4c415101a3db297ca9b5ff762da93fcf7b21be0c50411c5f98 not found: ID does not exist" containerID="3e5638a6f979eb4c415101a3db297ca9b5ff762da93fcf7b21be0c50411c5f98" Nov 25 15:40:27 crc kubenswrapper[4796]: I1125 15:40:27.768484 4796 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e5638a6f979eb4c415101a3db297ca9b5ff762da93fcf7b21be0c50411c5f98"} err="failed to get container status \"3e5638a6f979eb4c415101a3db297ca9b5ff762da93fcf7b21be0c50411c5f98\": rpc error: code = NotFound desc = could not find container \"3e5638a6f979eb4c415101a3db297ca9b5ff762da93fcf7b21be0c50411c5f98\": container with ID starting with 3e5638a6f979eb4c415101a3db297ca9b5ff762da93fcf7b21be0c50411c5f98 not found: ID does not exist" Nov 25 15:40:27 crc kubenswrapper[4796]: I1125 15:40:27.768521 4796 scope.go:117] "RemoveContainer" containerID="08d359945a4954d0513a05c961a8c91dd4aad1585825ab20326ee4da27fea945" Nov 25 15:40:27 crc kubenswrapper[4796]: E1125 15:40:27.769076 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08d359945a4954d0513a05c961a8c91dd4aad1585825ab20326ee4da27fea945\": container with ID starting with 08d359945a4954d0513a05c961a8c91dd4aad1585825ab20326ee4da27fea945 not found: ID does not exist" containerID="08d359945a4954d0513a05c961a8c91dd4aad1585825ab20326ee4da27fea945" Nov 25 15:40:27 crc kubenswrapper[4796]: I1125 15:40:27.769103 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08d359945a4954d0513a05c961a8c91dd4aad1585825ab20326ee4da27fea945"} err="failed to get container status \"08d359945a4954d0513a05c961a8c91dd4aad1585825ab20326ee4da27fea945\": rpc error: code = NotFound desc = could not find container \"08d359945a4954d0513a05c961a8c91dd4aad1585825ab20326ee4da27fea945\": container with ID starting with 08d359945a4954d0513a05c961a8c91dd4aad1585825ab20326ee4da27fea945 not found: ID does not exist" Nov 25 15:40:27 crc kubenswrapper[4796]: I1125 15:40:27.769123 4796 scope.go:117] "RemoveContainer" containerID="0d6d08a9e4f3c67aa3ef12a26c16152d85921b57d42cf5a6bd0ee94a8d424df5" Nov 25 15:40:27 crc kubenswrapper[4796]: E1125 
15:40:27.772933 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d6d08a9e4f3c67aa3ef12a26c16152d85921b57d42cf5a6bd0ee94a8d424df5\": container with ID starting with 0d6d08a9e4f3c67aa3ef12a26c16152d85921b57d42cf5a6bd0ee94a8d424df5 not found: ID does not exist" containerID="0d6d08a9e4f3c67aa3ef12a26c16152d85921b57d42cf5a6bd0ee94a8d424df5"
Nov 25 15:40:27 crc kubenswrapper[4796]: I1125 15:40:27.773041 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d6d08a9e4f3c67aa3ef12a26c16152d85921b57d42cf5a6bd0ee94a8d424df5"} err="failed to get container status \"0d6d08a9e4f3c67aa3ef12a26c16152d85921b57d42cf5a6bd0ee94a8d424df5\": rpc error: code = NotFound desc = could not find container \"0d6d08a9e4f3c67aa3ef12a26c16152d85921b57d42cf5a6bd0ee94a8d424df5\": container with ID starting with 0d6d08a9e4f3c67aa3ef12a26c16152d85921b57d42cf5a6bd0ee94a8d424df5 not found: ID does not exist"
Nov 25 15:40:28 crc kubenswrapper[4796]: I1125 15:40:28.437621 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4111079e-cce1-4697-8608-067b6ea68dde" path="/var/lib/kubelet/pods/4111079e-cce1-4697-8608-067b6ea68dde/volumes"
Nov 25 15:40:36 crc kubenswrapper[4796]: I1125 15:40:36.410113 4796 scope.go:117] "RemoveContainer" containerID="e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927"
Nov 25 15:40:36 crc kubenswrapper[4796]: E1125 15:40:36.411237 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8"
Nov 25 15:40:47 crc kubenswrapper[4796]: I1125 15:40:47.409560 4796 scope.go:117] "RemoveContainer" containerID="e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927"
Nov 25 15:40:47 crc kubenswrapper[4796]: E1125 15:40:47.410569 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8"
Nov 25 15:40:50 crc kubenswrapper[4796]: I1125 15:40:50.008036 4796 scope.go:117] "RemoveContainer" containerID="67469e417b6291b0f3a3f4248ffed99ecc29d5c4b9d4357e0acc47194563cb7c"
Nov 25 15:41:01 crc kubenswrapper[4796]: I1125 15:41:01.409137 4796 scope.go:117] "RemoveContainer" containerID="e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927"
Nov 25 15:41:01 crc kubenswrapper[4796]: E1125 15:41:01.410227 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8"
Nov 25 15:41:16 crc kubenswrapper[4796]: I1125 15:41:16.410863 4796 scope.go:117] "RemoveContainer" containerID="e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927"
Nov 25 15:41:16 crc kubenswrapper[4796]: E1125 15:41:16.411689 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8"
Nov 25 15:41:27 crc kubenswrapper[4796]: I1125 15:41:27.409870 4796 scope.go:117] "RemoveContainer" containerID="e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927"
Nov 25 15:41:27 crc kubenswrapper[4796]: E1125 15:41:27.410776 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8"
Nov 25 15:41:39 crc kubenswrapper[4796]: I1125 15:41:39.410171 4796 scope.go:117] "RemoveContainer" containerID="e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927"
Nov 25 15:41:39 crc kubenswrapper[4796]: E1125 15:41:39.410937 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-h6xfl_openshift-machine-config-operator(c683b765-b1f2-49b1-b29d-6466cda73ca8)\"" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" podUID="c683b765-b1f2-49b1-b29d-6466cda73ca8"
Nov 25 15:41:51 crc kubenswrapper[4796]: I1125 15:41:51.409840 4796 scope.go:117] "RemoveContainer" containerID="e7cd5f45b2d2856440381aa8ce91304dcffcbe77311e339481556ad76dc5e927"
Nov 25 15:41:52 crc kubenswrapper[4796]: I1125 15:41:52.523345 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-h6xfl" event={"ID":"c683b765-b1f2-49b1-b29d-6466cda73ca8","Type":"ContainerStarted","Data":"a6dcb3607eabf7e8618cdfe9596b7fe55b285a8d82bcfa4d694f5a309aa5c425"}
Nov 25 15:42:07 crc kubenswrapper[4796]: I1125 15:42:07.772968 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ft5jr"]
Nov 25 15:42:07 crc kubenswrapper[4796]: E1125 15:42:07.774485 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4111079e-cce1-4697-8608-067b6ea68dde" containerName="extract-utilities"
Nov 25 15:42:07 crc kubenswrapper[4796]: I1125 15:42:07.774509 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="4111079e-cce1-4697-8608-067b6ea68dde" containerName="extract-utilities"
Nov 25 15:42:07 crc kubenswrapper[4796]: E1125 15:42:07.774544 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4111079e-cce1-4697-8608-067b6ea68dde" containerName="registry-server"
Nov 25 15:42:07 crc kubenswrapper[4796]: I1125 15:42:07.774558 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="4111079e-cce1-4697-8608-067b6ea68dde" containerName="registry-server"
Nov 25 15:42:07 crc kubenswrapper[4796]: E1125 15:42:07.774610 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4111079e-cce1-4697-8608-067b6ea68dde" containerName="extract-content"
Nov 25 15:42:07 crc kubenswrapper[4796]: I1125 15:42:07.774625 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="4111079e-cce1-4697-8608-067b6ea68dde" containerName="extract-content"
Nov 25 15:42:07 crc kubenswrapper[4796]: I1125 15:42:07.775022 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="4111079e-cce1-4697-8608-067b6ea68dde" containerName="registry-server"
Nov 25 15:42:07 crc kubenswrapper[4796]: I1125 15:42:07.777792 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ft5jr"
Nov 25 15:42:07 crc kubenswrapper[4796]: I1125 15:42:07.796110 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ft5jr"]
Nov 25 15:42:07 crc kubenswrapper[4796]: I1125 15:42:07.871888 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23756358-be00-4315-a0f2-1416047ab9a5-utilities\") pod \"redhat-marketplace-ft5jr\" (UID: \"23756358-be00-4315-a0f2-1416047ab9a5\") " pod="openshift-marketplace/redhat-marketplace-ft5jr"
Nov 25 15:42:07 crc kubenswrapper[4796]: I1125 15:42:07.872052 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23756358-be00-4315-a0f2-1416047ab9a5-catalog-content\") pod \"redhat-marketplace-ft5jr\" (UID: \"23756358-be00-4315-a0f2-1416047ab9a5\") " pod="openshift-marketplace/redhat-marketplace-ft5jr"
Nov 25 15:42:07 crc kubenswrapper[4796]: I1125 15:42:07.872117 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2l8f\" (UniqueName: \"kubernetes.io/projected/23756358-be00-4315-a0f2-1416047ab9a5-kube-api-access-f2l8f\") pod \"redhat-marketplace-ft5jr\" (UID: \"23756358-be00-4315-a0f2-1416047ab9a5\") " pod="openshift-marketplace/redhat-marketplace-ft5jr"
Nov 25 15:42:07 crc kubenswrapper[4796]: I1125 15:42:07.973316 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23756358-be00-4315-a0f2-1416047ab9a5-catalog-content\") pod \"redhat-marketplace-ft5jr\" (UID: \"23756358-be00-4315-a0f2-1416047ab9a5\") " pod="openshift-marketplace/redhat-marketplace-ft5jr"
Nov 25 15:42:07 crc kubenswrapper[4796]: I1125 15:42:07.973396 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2l8f\" (UniqueName: \"kubernetes.io/projected/23756358-be00-4315-a0f2-1416047ab9a5-kube-api-access-f2l8f\") pod \"redhat-marketplace-ft5jr\" (UID: \"23756358-be00-4315-a0f2-1416047ab9a5\") " pod="openshift-marketplace/redhat-marketplace-ft5jr"
Nov 25 15:42:07 crc kubenswrapper[4796]: I1125 15:42:07.973480 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23756358-be00-4315-a0f2-1416047ab9a5-utilities\") pod \"redhat-marketplace-ft5jr\" (UID: \"23756358-be00-4315-a0f2-1416047ab9a5\") " pod="openshift-marketplace/redhat-marketplace-ft5jr"
Nov 25 15:42:07 crc kubenswrapper[4796]: I1125 15:42:07.974049 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23756358-be00-4315-a0f2-1416047ab9a5-catalog-content\") pod \"redhat-marketplace-ft5jr\" (UID: \"23756358-be00-4315-a0f2-1416047ab9a5\") " pod="openshift-marketplace/redhat-marketplace-ft5jr"
Nov 25 15:42:07 crc kubenswrapper[4796]: I1125 15:42:07.974071 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23756358-be00-4315-a0f2-1416047ab9a5-utilities\") pod \"redhat-marketplace-ft5jr\" (UID: \"23756358-be00-4315-a0f2-1416047ab9a5\") " pod="openshift-marketplace/redhat-marketplace-ft5jr"
Nov 25 15:42:07 crc kubenswrapper[4796]: I1125 15:42:07.993080 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2l8f\" (UniqueName: \"kubernetes.io/projected/23756358-be00-4315-a0f2-1416047ab9a5-kube-api-access-f2l8f\") pod \"redhat-marketplace-ft5jr\" (UID: \"23756358-be00-4315-a0f2-1416047ab9a5\") " pod="openshift-marketplace/redhat-marketplace-ft5jr"
Nov 25 15:42:08 crc kubenswrapper[4796]: I1125 15:42:08.128032 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ft5jr"
Nov 25 15:42:08 crc kubenswrapper[4796]: I1125 15:42:08.697683 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ft5jr"]
Nov 25 15:42:08 crc kubenswrapper[4796]: I1125 15:42:08.714971 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ft5jr" event={"ID":"23756358-be00-4315-a0f2-1416047ab9a5","Type":"ContainerStarted","Data":"c604c38d3fa38580883a17fb7800debcab7cefab37664ec6b570e5d45fd47e25"}
Nov 25 15:42:09 crc kubenswrapper[4796]: I1125 15:42:09.725834 4796 generic.go:334] "Generic (PLEG): container finished" podID="23756358-be00-4315-a0f2-1416047ab9a5" containerID="7852e4805b4113624f18a8fd32195c509333c2910e2a375b8fcc6f35944214f2" exitCode=0
Nov 25 15:42:09 crc kubenswrapper[4796]: I1125 15:42:09.725902 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ft5jr" event={"ID":"23756358-be00-4315-a0f2-1416047ab9a5","Type":"ContainerDied","Data":"7852e4805b4113624f18a8fd32195c509333c2910e2a375b8fcc6f35944214f2"}
Nov 25 15:42:10 crc kubenswrapper[4796]: I1125 15:42:10.735407 4796 generic.go:334] "Generic (PLEG): container finished" podID="23756358-be00-4315-a0f2-1416047ab9a5" containerID="b118fdebabb18aed03f2508359daf34f66cd80e2850c4b98131f3ea2987c8388" exitCode=0
Nov 25 15:42:10 crc kubenswrapper[4796]: I1125 15:42:10.735482 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ft5jr" event={"ID":"23756358-be00-4315-a0f2-1416047ab9a5","Type":"ContainerDied","Data":"b118fdebabb18aed03f2508359daf34f66cd80e2850c4b98131f3ea2987c8388"}
Nov 25 15:42:11 crc kubenswrapper[4796]: I1125 15:42:11.747642 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ft5jr" event={"ID":"23756358-be00-4315-a0f2-1416047ab9a5","Type":"ContainerStarted","Data":"028433e8e5590c644abbaae0ba1c9ac11dd20a675ba3f8c0af86fba7eeef62f8"}
Nov 25 15:42:11 crc kubenswrapper[4796]: I1125 15:42:11.771637 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ft5jr" podStartSLOduration=3.307962549 podStartE2EDuration="4.77161875s" podCreationTimestamp="2025-11-25 15:42:07 +0000 UTC" firstStartedPulling="2025-11-25 15:42:09.729519719 +0000 UTC m=+4658.072629143" lastFinishedPulling="2025-11-25 15:42:11.19317588 +0000 UTC m=+4659.536285344" observedRunningTime="2025-11-25 15:42:11.767472279 +0000 UTC m=+4660.110581713" watchObservedRunningTime="2025-11-25 15:42:11.77161875 +0000 UTC m=+4660.114728184"
Nov 25 15:42:18 crc kubenswrapper[4796]: I1125 15:42:18.128395 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ft5jr"
Nov 25 15:42:18 crc kubenswrapper[4796]: I1125 15:42:18.128934 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ft5jr"
Nov 25 15:42:18 crc kubenswrapper[4796]: I1125 15:42:18.180441 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ft5jr"
Nov 25 15:42:18 crc kubenswrapper[4796]: I1125 15:42:18.884554 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ft5jr"
Nov 25 15:42:18 crc kubenswrapper[4796]: I1125 15:42:18.940110 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ft5jr"]
Nov 25 15:42:20 crc kubenswrapper[4796]: I1125 15:42:20.854125 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ft5jr" podUID="23756358-be00-4315-a0f2-1416047ab9a5" containerName="registry-server" containerID="cri-o://028433e8e5590c644abbaae0ba1c9ac11dd20a675ba3f8c0af86fba7eeef62f8" gracePeriod=2
Nov 25 15:42:21 crc kubenswrapper[4796]: I1125 15:42:21.864608 4796 generic.go:334] "Generic (PLEG): container finished" podID="23756358-be00-4315-a0f2-1416047ab9a5" containerID="028433e8e5590c644abbaae0ba1c9ac11dd20a675ba3f8c0af86fba7eeef62f8" exitCode=0
Nov 25 15:42:21 crc kubenswrapper[4796]: I1125 15:42:21.864708 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ft5jr" event={"ID":"23756358-be00-4315-a0f2-1416047ab9a5","Type":"ContainerDied","Data":"028433e8e5590c644abbaae0ba1c9ac11dd20a675ba3f8c0af86fba7eeef62f8"}
Nov 25 15:42:21 crc kubenswrapper[4796]: I1125 15:42:21.864871 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ft5jr" event={"ID":"23756358-be00-4315-a0f2-1416047ab9a5","Type":"ContainerDied","Data":"c604c38d3fa38580883a17fb7800debcab7cefab37664ec6b570e5d45fd47e25"}
Nov 25 15:42:21 crc kubenswrapper[4796]: I1125 15:42:21.864887 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c604c38d3fa38580883a17fb7800debcab7cefab37664ec6b570e5d45fd47e25"
Nov 25 15:42:22 crc kubenswrapper[4796]: I1125 15:42:22.171803 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ft5jr"
Nov 25 15:42:22 crc kubenswrapper[4796]: I1125 15:42:22.258859 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2l8f\" (UniqueName: \"kubernetes.io/projected/23756358-be00-4315-a0f2-1416047ab9a5-kube-api-access-f2l8f\") pod \"23756358-be00-4315-a0f2-1416047ab9a5\" (UID: \"23756358-be00-4315-a0f2-1416047ab9a5\") "
Nov 25 15:42:22 crc kubenswrapper[4796]: I1125 15:42:22.259073 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23756358-be00-4315-a0f2-1416047ab9a5-utilities\") pod \"23756358-be00-4315-a0f2-1416047ab9a5\" (UID: \"23756358-be00-4315-a0f2-1416047ab9a5\") "
Nov 25 15:42:22 crc kubenswrapper[4796]: I1125 15:42:22.259172 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23756358-be00-4315-a0f2-1416047ab9a5-catalog-content\") pod \"23756358-be00-4315-a0f2-1416047ab9a5\" (UID: \"23756358-be00-4315-a0f2-1416047ab9a5\") "
Nov 25 15:42:22 crc kubenswrapper[4796]: I1125 15:42:22.260546 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23756358-be00-4315-a0f2-1416047ab9a5-utilities" (OuterVolumeSpecName: "utilities") pod "23756358-be00-4315-a0f2-1416047ab9a5" (UID: "23756358-be00-4315-a0f2-1416047ab9a5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:42:22 crc kubenswrapper[4796]: I1125 15:42:22.265528 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23756358-be00-4315-a0f2-1416047ab9a5-kube-api-access-f2l8f" (OuterVolumeSpecName: "kube-api-access-f2l8f") pod "23756358-be00-4315-a0f2-1416047ab9a5" (UID: "23756358-be00-4315-a0f2-1416047ab9a5"). InnerVolumeSpecName "kube-api-access-f2l8f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:42:22 crc kubenswrapper[4796]: I1125 15:42:22.280086 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23756358-be00-4315-a0f2-1416047ab9a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23756358-be00-4315-a0f2-1416047ab9a5" (UID: "23756358-be00-4315-a0f2-1416047ab9a5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:42:22 crc kubenswrapper[4796]: I1125 15:42:22.361545 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23756358-be00-4315-a0f2-1416047ab9a5-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 15:42:22 crc kubenswrapper[4796]: I1125 15:42:22.361603 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23756358-be00-4315-a0f2-1416047ab9a5-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 15:42:22 crc kubenswrapper[4796]: I1125 15:42:22.361617 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2l8f\" (UniqueName: \"kubernetes.io/projected/23756358-be00-4315-a0f2-1416047ab9a5-kube-api-access-f2l8f\") on node \"crc\" DevicePath \"\""
Nov 25 15:42:22 crc kubenswrapper[4796]: I1125 15:42:22.875102 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ft5jr"
Nov 25 15:42:22 crc kubenswrapper[4796]: I1125 15:42:22.905677 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ft5jr"]
Nov 25 15:42:22 crc kubenswrapper[4796]: I1125 15:42:22.915781 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ft5jr"]
Nov 25 15:42:24 crc kubenswrapper[4796]: I1125 15:42:24.419063 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23756358-be00-4315-a0f2-1416047ab9a5" path="/var/lib/kubelet/pods/23756358-be00-4315-a0f2-1416047ab9a5/volumes"